A Guide to Twitter and Social Media Safety for Academics (and Everyone Else)

Paula R. Curtis
May 20, 2022

At the start of 2021, I created A Guide to Best Twitter Practices for Academics (and Everyone Else), which recommended ways to effectively and respectfully use Twitter. As I have written elsewhere (and is generally apparent to anyone who engages online), not every person using social media is doing so with good intentions. This article is an extension of the Academics Online series of virtual events on digital harassment in Asian Studies hosted through the Association for Asian Studies’ Digital Dialogues. Although public-facing work and digital engagement are increasingly demanded of educators, students, and various types of workers, they are not a standard part of professional development skills or curricula, nor is training in how to navigate these online spaces and protect oneself in the process. This article therefore offers practical safety tips for managing one’s Twitter experience, whether you’re learning how to recognize questionable users, in need of help mass blocking trolls, or thinking more broadly about security.

You can navigate to any subsection of the article via the links below. If you want to return to these quicklinks, you can use the “Top ▲” button on the bottom right of the page to jump back to the beginning at any time:

Why Do We Need to Know This? Don’t Harassers Get Bored?
Why Focus on Twitter?
Spotting Suspicious Accounts
Common Antagonistic Tactics
Do We (Not) Respond?
Gradations of Disengagement
Doxxing
Putting Yourself First
Not All of Social Media is a Dumpster Fire


Why Do We Need to Know This? Don’t Harassers Get Bored?

If you attended our previous Academics Online sessions, particularly Session 1, then you may already have a good sense of the danger that extremist activism can pose, even if you do not personally use social media. Session 1 (for which videos are available) featured scholars of China, India, Japan, and Hong Kong discussing the use of digital spaces for targeted harassment of scholars, journalists, and others who publicly speak on hotly debated issues. Right wing populism, historical denialism, and conspiracy theories can be fueled by online communities that seek to radicalize and grow their followers. Scholars have faced ongoing harassment, including misrepresentations of them and their work in the public sphere. They have had employers or funders contacted, and even faced threats of rape or death.

Though it is simple for those who do not engage online to suggest that the attacks some people face are simply one-offs by users who will get bored and move on, all it takes is a single person with an agenda and/or their network to draw out that harassment over an extended period of time. In my recent article in Asia Pacific Journal: Japan Focus,1 I highlighted one such example of instigation and amplification.

The image above shows data I collected on a single harasser with a very large following on Twitter. This person spent over 8 months tweeting at or about me and the five scholars who fact-checked a controversial and ethically dubious article on comfort women in WWII, a topic that always draws out trolls on social media.

As seen in dark red and indicated with an arrow, in a single day, this user tweeted as many as 62 times about us. And in this case, these 62 tweets were solely about one of the writers. Imagine your phone buzzing with threats and negative statements about you 5 times an hour for twelve hours straight. That’s 60 times in daylight hours. Once every 12 minutes. And those are just messages directly about you: messages that use your face, refer to you by name, or reply to one of your threads or posts. I could have included more detailed data on replies or indirect references, but the bare minimum was illustrative enough on its own. Whether or not you turn off those notifications, the content is still there, as is the knowledge that it’s happening, and the knowledge that others are seeing it.

As much as we would like to believe that these things pass, sometimes they don’t. Sometimes there are strategic and organized attacks being used by a specific community to discredit and intimidate our friends and colleagues. These are challenges that we must be aware of and prepared for.


Why Focus on Twitter?

There certainly are other social media platforms academic organizations, departments, and individuals use to communicate, namely Facebook and Instagram.2 These platforms can be equally useful for sharing information with broader communities on or off campus. That said, Facebook and Instagram are not really designed for rapid sharing and discoverability in quite the same way as Twitter. Most people who still use Facebook and Instagram, particularly younger individuals, tend to keep their profiles private or only add known friends and family to avoid scams and fake accounts. And compared with users of more interactive platforms (like Twitter or TikTok), users who want to share or locate content on Facebook or Instagram, especially for academic purposes, search less often through hashtags or public groups, which can make these venues a bit more closed-circuit.

In contrast, Twitter more seamlessly integrates sharing features, communities centered on public hashtags, and content searches. There is no need to necessarily "join" a community, as on Facebook, and given that image sharing is not the main focus, as on Instagram, user profiles tend to be public more often. The ability to stumble across or search for users or topics more rapidly, as well as share posts more broadly, has made Twitter an effective platform for high speed communication, though this discoverability also invites harassment just as quickly. As such, the remainder of this article will focus on privacy and safety issues for Twitter.


Spotting Suspicious Accounts

Where can our awareness of bad actors in the Twitterverse begin? Given the very public nature of Twitter and the ease with which content can be shared, encountering suspicious accounts is inevitable. When we get a reply or a follow from such a user, we often ask ourselves: Who IS this person? Are they an automated bot? If they have the blank, default Twitter user picture, are they actually a real person who has only just created their profile, or could they be a fake account? Or are they just someone I don’t want to have follow me?

The most important thing to remember from the beginning is that whatever your reason for not wanting a person to interact with you, your choice not to engage with them is valid. There is no rule that says anyone is entitled to follow you. This is a decision in your hands.

So what makes someone’s Twitter profile suspicious? What are some common characteristics that may be red flags? Below I categorize some standard questionable patterns.

» String of Random Numbers Person

When you sign up for Twitter, your username (also known as a "handle") is automatically generated with a string of random numbers. In order to avoid being mistaken for a bot or an anonymous troll account, the first thing you should do is change this to a handle of your choice. For example, rather than @paular7363949, my username is now @paularcurtis. This change is one way that you can indicate human intervention in an account; if someone is potentially creating hundreds of bots (fake accounts) all at once, it is unlikely they'll go in and change every username. That said, in some cases bots or fake accounts may not have this string of numbers, but it is helpful to know that it is one common pattern. When a random person follows you and they have an inexplicable string of numbers as their handle, it is often a bot with no human regularly operating the account.

I should add as a caveat that you WILL find real people (even very famous ones!) who don’t bother to change this string of numbers. Ultimately, it's a personal choice. But if you don’t change this and your profile or content does not make very clear who you are, then you risk being mistaken for a fake or troll account. Generally speaking, it’s also harder to find you and share your content if your handle is a bunch of numbers one cannot readily remember.
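For readers who like to skim a long list of new followers programmatically, the digits-heavy handle pattern can be approximated with a simple regular expression. The sketch below is only an illustrative heuristic of my own (Twitter does not document its exact username format here), and, as just noted, plenty of real people keep such handles:

```python
import re

# Illustrative pattern: letters/underscores followed by a long run of
# digits, resembling an auto-generated handle like "paular7363949".
# This is an assumption for demonstration, not Twitter's documented format.
AUTO_HANDLE = re.compile(r"^[A-Za-z_]+\d{6,}$")

def looks_auto_generated(handle: str) -> bool:
    """Return True if a handle matches the name-plus-digits pattern."""
    return bool(AUTO_HANDLE.match(handle.lstrip("@")))
```

A handle like "@paular7363949" matches the pattern, while "@paularcurtis" does not; treat a match as a nudge to look at the rest of the profile, not as proof of a bot.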

» Questionable Follower Ratio Person

Another pattern often seen in conjunction with the String of Random Numbers Person type account is a questionable follower ratio. In the image to the right, an account just created the month this article was written has 325 accounts they are following but 0 followers, making it somewhat suspect. Similarly, if you refer to the image above again, you'll see that the string of numbers user has 409 accounts that they follow but only 27 accounts that follow them.

At a glance, such an uneven ratio suggests an account is a bot programmed to follow a bunch of other accounts. More often than not, they'll be following politicians, news companies, or people with specific keywords in their bios. And because they typically follow these big name users and/or people at random while themselves having little substantive content on their feed (because, of course, they might not be real), few if any people follow them back.

» Just Joined/New Profile Pic Person

Similar to the questionable follower ratio indicating suspicious or bot accounts, the Just Joined/New Profile Pic Person is very common as well. The account that follows you will often have been created that month or in the last couple of months. The screencaps below were taken in December 2021 and June 2021 respectively, so when these accounts followed me they were brand new fakes. These recently created accounts will sometimes have only one post, which may be something about just now joining the platform. The hashtag #NewProfilePic is one seen often enough to be a dead giveaway for a bot account.


Looking at these two profiles above, you’ll note that the one on the left adheres to our "string of random numbers" pattern for the username, while the one on the right does not. However, they both have almost identical photo poses and the only tweet they’ve made since being created is #NewProfilePic. I have also seen instances in which the first picture posted to the account was something stock (like a bunch of flowers or a beach) with the #NewProfilePic tag. Some of the accounts may also have "#NewProfilePic" in another language, or have #NewProfilePic as their first post and then a bunch of retweets afterwards. It's not clear to me what these (or any other) bots accomplish, but after 20 of them have tried to follow you, you begin to see these patterns. It is also worth noting that many of these fake accounts disproportionately use images of people of color.

» The Humble Gentleman (Who Loves Life)

The Humble Gentleman (Who Loves Life) also appears in combination with many of these other characteristics. In my experience these bot accounts are frequently pretending to be men. The user bio makes claims that the person is humble, honest, happy, fun-loving, has a great personality, or sometimes is into God (or his country).

There are a lot of variations, but such accounts are rarely very active. They could include a bunch of random photos that seem to have been scraped from stock scenes of pretty landscapes or to have been stolen from someone’s personal account or online profile. On occasion these Humble Gentlemen feature a series of selfies to make it seem that the account is a real person (see an example of this in the "Person Only Following Women" archetype below).

» The Military Person (Who Loves God or Wants a Relationship)

The Military Person is very similar to the Humble Gentleman, though may not always be a man. These accounts can feature ordinary photographs of a person or an official-looking picture of someone in military uniform. Still, on occasion, the image won't seem to have any connection to the military at all. Their bios might identify them as "US Navy, Loves Our Country" or include some kind of phrase about how they are "looking for love" or "looking for a serious relationship."

In some cases, the images or names of military persons have been lifted directly from Wikipedia or another website and used to create the bots. Here is one example where searching a public figure's name revealed over a dozen fake accounts:

The gif above shows multiple profiles created between 2009 and 2022 in the name of Leigh Ann Hester, a United States Army National Guard soldier. In addition to variations on auto-generated handles (typically with a string of random numbers attached), we see some user bios that echo our "Humble Gentleman" theme ("A honest, sincere a lovely person") and the "Wants a Relationship" trope ("am a good girl and love" "LOVER"). Many accounts also have the questionable follower ratio. The images, which are, indeed, Hester, appear to have been scraped from Wikipedia or various news outlets.

» The Trustworthy Doctor

In addition to military personnel, we also find that fake accounts regularly claim to be doctors or surgeons. I'm not sure exactly why, but my guess would be that they are trying to play to common perceptions that military figures and doctors are trustworthy people. The two examples below effectively illustrate a combination of our above mentioned characteristics in both English (left) and in Japanese (right).

On the left, we have #NewProfilePic for an account using professional-looking photos of a man in scrubs, where the doctor is "Looking For a Relationship." His follower ratio is a bit suspect. On the right, a (purportedly) Japanese woman wearing a medical mask and scrubs identifies as "Sakura-san," though their handle includes the name "Anthony" followed by a string of random numbers. In Japanese, they give a greeting and claim to be an orthopedic surgeon working in Iraq. This user also has a questionable follower ratio and their only post is a #NewProfilePic tweet. With so many red flags in a single profile, this fake account is easy to spot.

» The Person Only Following Women (or Another Group)

Similar to how many bots will only follow politicians, news outlets, or other notable peoples or organizations, I've also found that many fake accounts have been programmed to strategically follow women and/or a particular field of interest.

To the left you can see a standard Humble Gentleman Account that began following me in December 2021 (a recently created account at the time). The bio insists "God is the only way to success" and the feed is full of seemingly random professional photos of the person whose image is used. Some kind of strange site that I dare not click on is listed in the profile as well. Although there is not a questionable follower ratio, when we examine the accounts the user is following, we find that it is almost exclusively women in Asian Studies and/or employed in Asia-related media. This suggests to me the bot targets a specific kind of account, which can be a warning sign when paired with the other dubious profile characteristics.

» The Person Who Only Retweets, Tweets Excessively, or Only Tweets Stock Photos

You’ll also find that somewhat suspicious accounts might only retweet and/or retweet excessively. Sometimes it’s entirely stock photos with little textual content, sometimes it's all retweets that include a specific hashtag or keyword, and sometimes it might be thematic, focused on something like soccer, politicians, or hot button issues in the news. To the right, you can see a classic #NewProfilePic post along with a series of stock photos of sweets and the Netherlands.

It can often be tricky to figure out if the account is a bot or a real person who prefers to lurk and not produce their own content, given that some people choose only to retweet materials from others, rather than tweet themselves. To figure this out, you might give their last dozen or so tweets or likes a skim (or their "following" list) to get a sense of who they are and if there is a real person behind the account.

» The Celebrity or Famous Person

Every now and then I get followed by Keanu Reeves. Sadly, not the real Keanu Reeves, but a bot or fake account using his photo. Much like our military people whose info and images have been scraped from the web, you may find that some accounts use the likenesses of famous celebrities or other public figures (even if they may not be recognizable as famous to you). Below I provide an example of an account pretending to be a notable person that also incorporates many of our red flags.

At a glance, Warren Steve does not have random numbers at the end of his handle, so that might seem promising. Looking at the bio, which is in Japanese, he notes liking food, people, Japanese temples, and that he generally admires all that is Japan. He could just be a guy into Japanese culture. His join date, December 2021, is a month before he began following me, which gives me pause. His following/follower ratio, 612 to 52, is also a red flag. If we glance at his media-based posts, seen in the top right of the gif, his feed consists mostly of stock photos of nature and Japan (Tweets Only Stock Photos account!). The stock photos also seem a little at odds with his serious businessman demeanor. So how can we search to find out who this guy is?

One way to check if someone is using a stolen profile image is to use reverse image search. Google recently incorporated this feature into its browser as a right-click option known as "Google Lens." As you can see above, you click on someone's profile picture to bring up the image, then right-click it and select "Search image with Google Lens." You may have to adjust the search frame. For this profile photo, using Google Lens brings up a reverse image search that identifies our Warren Steve as the former Serbian Defense Minister Dragan Šutanovac, whose photos have been swiped for the fake account. Fishy! This is a user I would then block or remove.
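Taken together, the red flags covered in this section (a digits-heavy handle, a lopsided follow ratio, a brand-new join date, a feed that is nothing but #NewProfilePic) work like a rough checklist. The sketch below scores a hypothetical account record against them; the field names and thresholds are my own illustrative assumptions, not anything Twitter provides, and a high score is a prompt to look closer, never proof of a bot:

```python
import re

def red_flag_score(account: dict) -> int:
    """Count rough bot red flags for a hypothetical account record.

    The keys ('handle', 'following', 'followers', 'months_old',
    'first_post') and all thresholds are illustrative assumptions.
    """
    score = 0
    # 1) Handle ends in a long run of digits (auto-generated look).
    if re.search(r"\d{6,}$", account.get("handle", "")):
        score += 1
    # 2) Lopsided ratio: follows many accounts, followed by almost none.
    following = account.get("following", 0)
    followers = account.get("followers", 0)
    if following >= 100 and following > 10 * max(followers, 1):
        score += 1
    # 3) Account created very recently (within the last two months).
    if account.get("months_old", 99) <= 2:
        score += 1
    # 4) Only visible post is a #NewProfilePic tweet.
    if "#NewProfilePic" in account.get("first_post", ""):
        score += 1
    return score
```

An account like {"handle": "user83749274", "following": 400, "followers": 5, "months_old": 1, "first_post": "#NewProfilePic"} trips all four flags, while an established account with a readable handle and a normal follow ratio scores zero.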

How Can I Show I'm Not a Bot or a Troll?

Now that you know what many of these suspicious accounts look like, it's possible to avoid being mistaken for one. Though not every fake profile will use these patterns, a great deal of them do. The most important steps you can take to identify yourself as a real person are:

1) Change your username/handle as soon as you sign up.

2) Use your real name or something close to it that would make you identifiable to friends and colleagues.

3) Fill out your user bio with some kind of identifying information about yourself, your research, or whatever else you feel comfortable and safe sharing.

4) If you think you might be mistaken for a bot or a troll, be specific about the purpose of the account. If you plan on being a lurker account that only retweets, perhaps just say that in the bio! Maybe even identify what people will find in your feed.

These first steps will help people to recognize you as a real person BEFORE you begin to follow their accounts and reduce the likelihood of being mistakenly blocked. Of course, these are only surface-level characteristics of an account. In the next section I will address different types of negative activity by many accounts that are actually real people and how to recognize antagonistic behavior by bad faith actors on social media.

Common Antagonistic Tactics

Imagine you have said something or shared something that gets you on the radar of social media trolls. A person or (potentially large) community has decided they are going to begin interacting with your posts or share your content with ill intent. When this happens it’s useful to understand some of the common tactics that we see from antagonizers and to be able to recognize them as statements or actions in bad faith.

In an ideal world, people everywhere would be logical, rational individuals. However, online, particularly in places where it’s easy not to read carefully before quickly hitting the send button, unfounded hostility can come at you fast. The rhetoric and strategies used by extremist online communities are common across regions and topics. One of the most useful guides I've found to these trends in misinformation and disinformation is climate change researcher John Cook's FLICC taxonomy of science denial. His detailed infographic below lays out the core categories of Fake Experts, Logical Fallacies, Impossible Expectations, Cherry Picking, and Conspiracy Theories, as well as their many subcategories.

Cook's site has some fantastic explainers and examples of each of these many strategies, and I draw on his definitions for my list below. Here I'll briefly focus on the main five in the context of Twitter harassment I have experienced from Japanese ultranationalists on the comfort women issue. In this context, trolls often focus on historical denialism of wartime atrocities, attempting to sow doubt on the testimony of, and evidence pertaining to, survivors of wartime sexual violence.

  • Fake Experts: Presenting an unqualified person or work as a credible source.

    More often than not, ultranationalist trolls who follow comfort women hashtags and news will share what they consider to be "authoritative" sources, which are often dubious right wing blogs of unclear authorship or historical materials that have been selectively excerpted to "prove" their claims or disprove other forms of evidence.

  • Logical Fallacies: Presenting a conclusion that does not logically follow from the argument or topic at hand.

    When their views are challenged, the trolls will typically jump to a logical extreme, such as, "If you are claiming the Japanese military committed a war crime in WWII, then you must be anti-Japanese and racist." In a historical context, this might be "If women enslaved for sexual purposes made money, then it was not coercive or enslavement."

  • Impossible Expectations: Demanding unrealistic standards of certainty to prove or disprove a point.

    This is another common fallacy used to assert that inconvenient truths cannot be valid if a certain threshold is not met. For example, claiming that oral testimonies are entirely unreliable, or that we would need "credible" government records (though what those might be is unclear) if one were to ever prove that the Japanese military committed any wrongdoing.
  • Cherry Picking: Selecting or acknowledging only data that supports your position while ignoring the rest.

    This technique of making exclusionary claims is quite common among historical denialists of comfort women history. One document often tweeted as "proof" that comfort women were not coerced, abused, or recruited through dubious methods is POW Interrogation Report No. 49 (1944), which states that comfort women were "nothing more than a prostitute or camp follower." However, in the very next paragraph the report also states that the nature of the work women were recruited for was "not specified" and explicitly undertaken "on the basis of... false representations"—a pretext that invalidates arguments that these women were participating voluntarily with full knowledge of the circumstances when or if they were not directly coerced. This section of the report is often omitted or ignored when ultranationalists share the Report 49 image.
  • Conspiracy Theories: Suggesting there is some kind of secret plan or nefarious scheme that is hiding the truth of a matter.

    Among many other extreme claims about government coverups and propaganda, I and many of my colleagues faced a variety of conspiratorial claims for disagreeing with right wing ultranationalists on social media. In addition to being labeled pro-CCP spies or communists we were accused of being funded by China, North Korea, or South Korea to promote anti-Japanese positions and scholarship.

As Cook's graphic above demonstrates, there are many variations and subcategories that put a finer point on these tactics, but it's useful to become familiar with the broader forms to more readily recognize them when they are directed at you. For more on these conspiracies and the real-life impacts they can have in the context of comfort women and Japanese online communities, see my APJJF article.


Do We (Not) Respond?

Folks who have been plagued with a wide variety of nonsense, hostility, and maybe even threats on Twitter always want to know: Should you respond? Why might one choose to reply (or not)? Should you just remove the offender and move on?

It’s important to remember that an online troll is rarely actually trying to engage in a meaningful way with the person they are harassing. That is not to say meaningful engagement never happens, but by and large, these users have entered into a dispute or engagement with a specific objective such as:

  • Disinformation/Misinformation
  • Posturing
  • Entertainment
  • Bad Faith Arguments & Denials
  • Fodder for more Harassment or Validation

They might be trying to spread disinformation or misinformation. They might be posturing, attempting to be visible so that they can present themselves or their views in a way that gains recognition from others. They could just be stirring the pot because they think it's fun. In such cases, it is very likely that they are putting forth bad faith arguments on purpose with a goal in mind. For example, they could be promoting historical denialist stances or attempting to invalidate views that are not in keeping with their own. Here you often see many of those logical fallacies described above in play. Their engagement may also simply be a way for them to draw attention to you and/or encourage others to pile on and target you. If you give them more fuel they can feed the fire of their manufactured outrage. The reason they are interacting with you could be one or all of these.

But the key here is: Twitter harassers and trolls are seldom governed by a clear or stated logic or reason.

This type of activity can be very difficult for academics to grapple with in particular because part of our job is to address inaccuracies (and injustices) when we see them; to maintain and rely on credible sources of information with academic integrity and to identify those that are not; to bring to light frustrating and misleading tactics or misstatements; and to support our colleagues who may become targets of hateful attacks. We are trained to educate others. So when harassers appear, our instinct is often to be reasonable with them. But it is crucial to remember that most online harassers are there to hold a line and foment discord, not to learn. It does not matter how many times you provide someone with the truth. If it is not in their interest to believe you or recognize what you’ve told them as valid, they will not do it.

This is not to say that we shouldn't sometimes try, or that there aren’t reasons why we might sometimes respond—particularly if your goal is for others to see what you’ve said and how you’ve weighed in. But be sure that no matter what you do, you protect your time and be aware of the mental and emotional costs of engagement.

Commenting Indirectly


It may be that you’d like to call someone out in the public sphere for bad behavior or questionable opinions they’ve tweeted, but then you’re faced with an ethical question: am I in fact doing more damage by directing internet traffic to their account? In addition, if you quote-tweet someone and your account is not set to private, they will immediately receive a notification and see what you’ve said, which could stir up conflict. There are two main passive ways in which people take caution not to put a target on themselves when they comment on something, even if these methods are not perfect.

» Screencapping to Comment

One method people use is taking a screencap of someone’s tweet rather than linking to their content directly. This has three relevant effects:

1) it prevents a notification being produced that you've commented on something

2) it can serve as a record of a tweet in the event that someone deletes it

3) it lessens the likelihood that someone with a significant number of followers or the ability to instigate significant harassment will see what you've posted right away

In the example below we can see that directly quoting someone at many different intersections of hot button issues would likely bring out a whole host of trolls, so the tweeter chose not to direct quote them.

Screencapping does not entirely prevent people from finding that you’ve said something about them, but it can be a good way to be a little bit more cautious than direct interaction, which might be a concern if you are, say, in a vulnerable career position, tweeting about a particular controversial topic, or criticizing someone with a huge number of followers.

» Subtweeting

Another option to more passively comment on something is “subtweeting.” Subtweeting is referring to someone in your tweet or comment while not using any directly identifying information about them. Sometimes people don’t outright say “I am subtweeting” when they write, but other times they do. This might be by using a hashtag (as seen below) or a clear statement of “Yes, this IS a subtweet.” At this point people may become curious if they don’t know who you’re talking about and go seek out whatever drama is afoot on Twitter.

The risks associated with drawing more attention to someone or their content are also a reason to tag people with care when you make a post or reply on a thread. By including someone in a conversation you may be unintentionally putting them in danger of becoming a target if it’s in connection with the wrong person or the wrong discussion. The same can also be said of using hashtags, which some trolls monitor in order to harass people speaking out about specific subjects and to generate conflict. It’s important to always be cognizant of how you amplify people and information, even if your intentions are good.


Gradations of Disengagement

Let’s say you have attracted unwanted attention from an individual or even a very large group of Twitter harassers. Or, alternatively, you expect to in the near future; maybe you have a publication coming out soon, were interviewed on television, or are showing a screening of something controversial that will inevitably attract trouble. When your work is public-facing there are many reasons you might become more visible on social media.

Depending on how cautious you want to be and how comfortable you are engaging with questionable characters, there are various steps you can take to protect yourself or remove yourself from harm's way, ranging from mild to extreme. These actions may entail controlling what you see on social media or limiting what others see of your social media. They are, in order of severity, as follows:

Control What You See
  • muting

Preemptively Control What You See
  • restrict replies & retweets

Control What They See
  • soft block
  • hard block

Preemptively Control What They See
  • mass block
  • private account

Note: My examples are based on managing a personal account. For those who run official accounts for organizations, academic institutions, or highly public programs, you may have to be careful with the choices you make. In some cases this may extend to personal accounts. As we have seen in recent years, depending on the work that you do there may be political considerations to certain forms of disengagement. Who you block and for what reason sends a message, and in extreme cases it may even be against the law to block someone from your official account.

For my animated examples below Tristan Grunow (Nagoya University) kindly allowed me to use his profile to demo Twitter features, but I promise no Twitterati were harmed, muted, removed, or blocked in the making of these examples. For the sake of convenience, the explanations and images I provide are from browser-based Twitter use, though navigating to various features that allow disengagement are similar on the phone app.

» Muting

Muting removes a specific person's tweets from your timeline without unfollowing or blocking that account. When muted, the user’s posts are no longer visible to you (though they can still see yours). As shown on the right, you can locate the mute option by clicking the three dots icon in the upper right-hand corner of a tweet. You can either "Mute @user" (all of their content) or, if you do not want to receive notifications on a specific thread you interacted with or were tagged in, hit "Mute this conversation." The same dots can also be found on a user's profile page, with the same option to mute the entire account.

Muting can be helpful when you don’t necessarily want to stop following someone. Perhaps they are a colleague you don't want to offend or an organization you're affiliated with, but you just aren’t interested in seeing what they post. Maybe they incessantly upload pictures of their car or constantly share graphic news stories. Nothing against that person; you just don’t want to see it. They won’t know you’ve muted them, and you can unmute them at any time. If they’re a troll, muting means you no longer have to think about them: they are effectively screaming into a void, because you can’t see their posts and they won’t ever get a reply from you (though, on the downside, others who visit your timeline could still see them).

Muting is an especially helpful feature if you have a post or comment that went viral or is on a viral thread. If it’s annoying you or stressing you out, just mute it.
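Conceptually, muting is just a client-side filter: the platform drops a muted account's posts before rendering your timeline, without ever telling the author. A minimal sketch of that logic (the function names and tweet fields here are invented for illustration, not Twitter's actual API):

```python
# Illustrative sketch of client-side muting: tweets from muted accounts
# (or muted conversations) are filtered out before you see them, while
# the muted users are never notified and can still see your posts.
def filter_timeline(timeline, muted_accounts, muted_conversations=()):
    """Return only the tweets a user should see after muting."""
    visible = []
    for tweet in timeline:
        if tweet["author"] in muted_accounts:
            continue  # "Mute @user": hide everything from this account
        if tweet.get("conversation_id") in muted_conversations:
            continue  # "Mute this conversation": hide one thread only
        visible.append(tweet)
    return visible

timeline = [
    {"author": "@colleague", "text": "New paper out!", "conversation_id": 1},
    {"author": "@troll", "text": "Bad-faith reply", "conversation_id": 1},
    {"author": "@car_fan", "text": "My car again", "conversation_id": 2},
]
print(filter_timeline(timeline, muted_accounts={"@troll"}))
```

Note that the filtering happens entirely on the viewer's side, which is why muting changes nothing about what the muted account can do or see.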

» Restrict Replies & Retweets

Another way to preemptively control what you see on your feed (as well as how others can interact with it) is to use the "Change who can reply" function. As seen on the right, by navigating to the top right of your tweet and clicking the three dots icon you can select this feature, which will give you three options for reply restrictions: Everyone, People you follow, or Only you. You can also enable this setting when composing a tweet.

This capability is especially helpful if you are posting on a controversial topic that tends to attract trolls or if you're responding to something said by a user with a huge following that could be mobilized against you and flood your replies. Regardless of the settings you choose for your reply restrictions, anyone mentioned can always reply, so tweet with care.

Another option for restricting what you see from an individual user is "Turn off retweets" (you can view this option in the gif below). If someone constantly retweets content you would rather not see, you can go to their profile, click the three dots icon, and select the first option listed there. This will not retroactively remove their retweets from your feed, and for whatever reason it cannot be done for all accounts, but it can weed out some content you may not want to see.

» Soft Block

The next level of disengagement is a soft block, which removes someone from following you. On your browser, you can soft block by going to the same user profile options under the three dots icon and selecting “Remove follower” (see the gif to the left). As of this writing, the remove follower option does not appear on the phone app, but you can achieve the same effect by blocking the user and then immediately unblocking them. This removes them from your followers.

Soft blocking is another gentle way to show someone the exit if you don’t want them directly interacting with your content. They will not receive a notification that they are no longer your follower. If they’re a bot, no harm done. If they’re a real person, they might not notice for a while that they are no longer following you, as your posts will no longer show up directly in their feed.

That said, if Twitter's algorithm shows them one of your tweets because someone else liked it, they may notice they are no longer seeing your content as a follower. If they do notice, they can always follow you again, at which point you can choose to remove them again or block them entirely.

» Hard Block

A hard block is the next step. This prevents someone from viewing your account and from seeing your tweets when they appear in someone else’s timeline or thread. Again, users are not notified, but if they visit your profile, they will see that you have blocked them.

In the screencaps below you can see an example of being blocked by someone (left) and blocking someone (right). As the image on the left shows, being blocked eliminates the ability to see that person's tweets from your account. On the right, note that even if you have blocked someone, if it is not mutual, you can still choose to see their tweets.

It's also important to note that there are ways around a block. Blocking is account-specific, so some people simply create alternate accounts and read the content of the person who blocked them using the new account or just view that person's account on Twitter without being logged in. So it’s NOT a perfect system. But most trolls are more concerned with the immediacy of being able to interact with someone and are not so determined to see your tweets that they’re regularly taking those more extreme measures. It does happen, but hard blocks mostly weed out the riff-raff.

When several colleagues and I came under fire from Japanese right wingers, and particularly virulent communities of trolls were bothering us non-stop, we found that blocking them was very effective (and that it also pissed them off). Many of them were incensed that we "refused to listen" to them, posting tweet after tweet saying "She blocked me!" “Me too!!” “I’m blocked too!!” “I can’t believe it!!” “I didn’t even interact with her!!” Many of them were preemptively blocked so that they never had the chance to visit my timeline and start harassing me.

Here it's important to remember: no one is entitled to your content or your time.

And contrary to what many of our trolls believe, blocking someone is not obstructing their freedom of speech. They’re free to say whatever they want. But you don't have to hear or see it if you don't want to. I don’t feel bad about it, and you shouldn’t either.

» Mass Block

For those of us who experience large-scale online harassment from particularly determined individuals or communities, the ability to mass block a large number of bad actors at once is essential. You can do this preemptively, if you know there are certain communities of users you never want looking at or reproducing your content, or reactively, if you find yourself targeted and need to respond after the fact. Here I will briefly introduce the functionality of three tools I have found particularly useful.

Twitter Block Chain

Twitter Block Chain is a Chrome extension that allows you to block all of the followers of a specific account. For me and my colleagues, the account I mentioned at the top of this article had about 12,000 followers at the peak of our harassment, and the user was tweeting about us nearly every day, encouraging her followers to do the same. In order to slow down their ability to access my content and interact directly with my account, I simply navigated to her followers list, ran the Twitter Block Chain extension from my browser, and watched as it blocked them by the thousands. See an example of this process in the gif to the right.

Twitter Block Chain can get a little buggy for really big follower lists and return errors (I suspect this is connected to pinging the Twitter API), but if you clear your browser cache, log in again, and run the extension once more on the same follower list it’s usually effective.
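Under the hood, a tool like this amounts to walking the target account's follower list page by page and issuing a block request for each user, which is also why very large lists strain the API. A rough sketch of that loop, using a stand-in client rather than the real Twitter API (all class and method names here are invented for illustration):

```python
# Rough sketch of "block chain" logic: page through a target account's
# follower list and block each follower. FakeClient stands in for a real
# Twitter API client, which paginates follower lists with cursors.
class FakeClient:
    def __init__(self, followers):
        self._followers = followers  # account -> list of follower handles
        self.blocked = set()

    def get_followers(self, account, cursor=0, page_size=100):
        """Return one page of followers plus the next cursor (None at the end)."""
        users = self._followers[account]
        page = users[cursor:cursor + page_size]
        next_cursor = cursor + page_size if cursor + page_size < len(users) else None
        return page, next_cursor

    def block(self, user):
        self.blocked.add(user)

def block_chain(client, target, allowlist=frozenset()):
    """Block every follower of `target`, skipping allowlisted accounts."""
    cursor = 0
    while cursor is not None:
        page, cursor = client.get_followers(target, cursor)
        for user in page:
            if user not in allowlist:
                client.block(user)
    return client.blocked

client = FakeClient({"@troll_hub": [f"user{i}" for i in range(250)] + ["@friend"]})
blocked = block_chain(client, "@troll_hub", allowlist={"@friend"})
print(len(blocked))  # 250
```

The allowlist parameter is a reminder that follower lists of hostile accounts can contain mutuals or colleagues you do not want to sweep up in a mass block.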

Block Party

Block Party is a browser-based tool that lets you tackle potential problems on Twitter ahead of time. It essentially acts as a filter for certain kinds of users, content, or interactions. You can set it to prevent direct messages or follows from specific types of accounts, review the content it has blocked, and more. The gif below is a demonstration of Block Party's functionality from their website.

Block Party also enables you to create custom block lists from specific tweets. For example, if a person has tweeted something incredibly offensive or has targeted you in a tweet, you can build a list of users from anyone who has retweeted or liked that post and mass block them in one action.

One really innovative feature worth mentioning is that you can give a trusted friend permission to access your Block Party account. So if you’ve been targeted and do not feel emotionally or mentally capable of dealing with the negative interactions you may be facing, you can give someone else the ability to go in and assist with the filtered content on your behalf.
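Building a block list from a single tweet reduces to taking the union of that tweet's likers and retweeters and subtracting anyone you trust, then blocking whoever remains. A simplified sketch of that set logic (with made-up handles, not Block Party's real interface):

```python
# Illustrative sketch of building a block list from a hostile tweet's
# engagers: everyone who liked or retweeted it, minus trusted accounts.
def build_block_list(likers, retweeters, allowlist=frozenset()):
    """Return the set of accounts to block in one mass action."""
    return (set(likers) | set(retweeters)) - set(allowlist)

likers = {"@troll1", "@troll2", "@mutual_friend"}
retweeters = {"@troll2", "@troll3"}
to_block = build_block_list(likers, retweeters, allowlist={"@mutual_friend"})
print(sorted(to_block))  # ['@troll1', '@troll2', '@troll3']
```

The subtraction step matters: people sometimes like or retweet a hostile post to document or dispute it, so an engager list is not a perfect proxy for hostility.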

Douche Block

The Douche Block app (an advertising gif of which, from their website, I have also included) allows you to automatically block accounts based on specific keywords that appear in their bio or handle. This lets you strategically avoid engagement from users with certain kinds of interests or who promote specific things.

Douche Block is another proactive way to ensure you are not interacting with or being seen by certain individuals, particularly if it's likely that they would bring negative attention to you or your content. However, one should be careful when using this kind of preemptive tactic because an app cannot tell when someone's content is created in jest or ironically.
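Keyword-based blocking of this kind boils down to a case-insensitive substring match against each account's handle and bio, which is exactly why it cannot tell sincerity from irony. A minimal sketch of the matching logic (the keywords and accounts are invented for illustration, and this is not Douche Block's actual code):

```python
# Illustrative sketch of keyword-based preemptive blocking: flag any
# account whose handle or bio contains a blocked keyword. Note the
# false-positive risk: a parody account would match just as easily.
def matches_keywords(account, keywords):
    """True if any keyword appears in the handle or bio (case-insensitive)."""
    haystack = (account["handle"] + " " + account.get("bio", "")).lower()
    return any(kw.lower() in haystack for kw in keywords)

accounts = [
    {"handle": "@history_denier", "bio": "The textbooks are lying to you"},
    {"handle": "@medievalist", "bio": "Historian of medieval Japan"},
]
keywords = ["denier", "lying"]
flagged = [a["handle"] for a in accounts if matches_keywords(a, keywords)]
print(flagged)  # ['@history_denier']
```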

Of course, none of these applications is perfect. When you are mass blocking, there is always a risk that someone you don’t actually want to block gets caught up in it, which can lead to hurt feelings or offense, even from someone you know. In some ways, the imperfections of these tools are a reason to think twice before hate-following an account or using your personal account to follow dubious ones that might be subjects of your research. There are always risks involved in linking yourself to or distancing yourself from others online.

Still, even if there is a time investment in choosing one or more of these tools to curate your Twitter feed, they can provide peace of mind and keep you from staying up until all hours of the night fighting with strangers on the internet. Unless that is something you enjoy doing.

» Private Account

Finally, the most extreme option you have is to completely lock down your account and make it private. Note the little lock symbol next to the username below. On the left, you can see what a private account looks like for someone who is a follower (tweets are visible) and on the right, the account displays as locked down and inaccessible to a non-follower.

Having a private account will prevent anyone who does not follow you from seeing your tweets, even if they are looking from another account. On the positive side, no more harassment from others unless that person already follows you. On the negative side, a locked account ultimately limits your interaction with and discoverability by others (which may be a problem if you're hoping to use Twitter to network or promote professional content). One compromise is that locking down your account for a short period of time can help slow a (temporary) swell of harassment. In the past, I've had colleagues do this to stop others from retweeting their content for dubious purposes until those individuals lost interest.

Whatever you do to help protect yourself, your content, and your mental well-being, whether it's a simple mute or a mass block, the decision is yours to make.


Doxxing

If your (successful) solution to someone bothering you is just a simple block or an account lockdown, then you are one of the lucky ones. Social media-based harassment can range from a stranger's stupid comment that is easily ignored to real-world danger that threatens careers and lives. It is therefore worth briefly discussing one of the most extreme forms of digital harassment, doxxing, a form of attack intended to spill over from online spaces into analog ones.

If you are not already familiar, doxxing is a form of online harassment in which someone publicly reveals previously private personal information about an individual or organization. This could be your real name, your home address, where you work, or something else. In contrast to complaints you might read online, doxxing is not simply calling someone out who is otherwise already visible, tagging them in a post, or otherwise calling attention to them.

At its most mild, doxxing could be something like sharing your private email address with a community of trolls. At its most severe, it can include using private information like a home or work address for a targeted attack like “swatting." Swatting refers to using someone’s location to call in a false police report in order to put an individual, their family, or others around them in a violent and potentially deadly situation under false pretenses. One of the most frightening parts of this kind of threat is that our personal information is actually not that difficult to find. By searching your name and other identifying information, it is fairly easy to locate current or past addresses, phone numbers, relatives, and more. This information may also be shared through sites that profit from its redistribution, known as data brokers.

Data brokers (also known as information brokers, information resellers, data aggregators, or information solutions providers) are individuals or companies that specialize in collecting data, mostly from public records but sometimes sourced privately, and selling or licensing that information to third parties. When you accept user agreements for various apps and websites, you may not even be aware that you are giving certain organizations permission to share or sell your information. In many cases, once data brokers or third-party websites have your information, they require that you individually seek out and request its removal from their platform. This can amount to hours or weeks of your life invested in hunting down and (sometimes fruitlessly) asking for your personal information to be taken down.

It's no surprise, then, that services have popped up to do this scrubbing of information for you; it is its own industry. These services do cost money, but in my experience not amounts that outweigh your peace of mind. Some of the services I've found through online searches or recommendations are:

  • DeleteMe (for deleting public records)
  • OneRep (for deleting public records)
  • AccountKiller (for deleting your profiles/information from sites that make it difficult)

The one I am familiar with and use myself is DeleteMe, so my examples below come from it, to give you a better idea of what these types of services do. I believe DeleteMe cost me about $100-125 a year, and the links I've provided include my referral, in case this particular service interests you.

Once you sign up for DeleteMe, you provide the security service with your information, such as full name, any variations or name changes, current and past addresses, family members, current and former email addresses, places of work, and so on. It might seem a little counterintuitive to hand an online company your information, but this is how they locate your data on other websites.

DeleteMe then reviews known data broker sites for whatever you have provided them. When your information is located, they submit removal requests on your behalf. DeleteMe performs record searches regularly and provides you with quarterly reports.

The screenshot to the right is part of my DeleteMe dashboard showing the number of records reviewed and number of records removed since I signed up in September of 2021.

Under Records Reviewed, you can see that the service searched ~2,500 records in the first quarter, then about 2,000 more in the second quarter, and roughly 225 more in the third quarter. Given that data broker sites are not necessarily exponentially growing, it is no surprise to see this drop off.

Looking at Records Removed, the first pass yielded 35 records, then an additional 9 and 3 respectively. Over 9 months, DeleteMe has checked 40 data brokers, reviewed 11,178 records, and found 36 brokers with some form of my personal information.

The DeleteMe dashboard also provides a colorful circle graph that breaks down the kind of information they located. As you can see to the left, 49% of the records revealed who my relatives are, 14% contained current or past addresses, 14% included my name (likely my full name, or it would not have been identified), 12% had information on whether or not I had a spouse, and 11% had my legal relationship status. In the past it has also given a percentage of records containing my age (meaning a site that had my birth year, at the very least), though I'm not sure what the threshold is for a percentage of records to appear on this chart since that is not visible on the latest dashboard graphic.

When you receive quarterly reports DeleteMe includes what broker websites had your information, whether the data is in the process of being taken down or has already been removed, and how long it took from the request they made to the actual removal of your records. They also tell you which site had the most information and, helpfully, approximately how much time their service saved you (which I'm sure is as much a marketing ploy as anything else, but I also imagine isn't too far off from the truth!).

There is no guarantee that services like this are 100% effective, but they do make it harder to acquire sensitive information within a few clicks. People can receive terrifying threats in virtual spaces, but the vast majority of harassers are not taking the time to go to the extremes I described. Nevertheless, it’s important to educate ourselves about how we, or those we care about, can be prepared for all potentialities. Knowing what data brokers are, and that tools like DeleteMe and other services exist to help manage online information (the spread of which may be beyond our control), is a helpful place to start.


Putting Yourself First

Now that you’ve learned all of the horrible ways that things can go wrong and people are terrible, let me reiterate an important reminder in all of this: You Must Put Yourself First.

Many of us perceive social media engagement as all-encompassing: something that sucks up all our time, that we must constantly push content to, and that we have to persistently keep up with. But the right level of engagement for you is the right level of engagement. You control the platform; the platform does not control you.

As this article has shown, there are ways for you to shape your experience and make it a healthier and safer space for you. Personally, I don’t follow any politicians or actors, and extremely few news sites. The 24-hour clickbait news cycle frustrates and depresses me. And when I just can’t hear another hot take on COVID or pandemic life, I will mute the words “COVID,” “pandemic,” “virus,” and “antivax” for 7 days, just to give myself a mental break. When I see someone I think is a weirdo or a bot, I remove or block them.

If someone is harassing you or saying inappropriate things on your feed, get rid of them. You don’t owe someone yelling at you on the internet your time any more than you might owe an unhinged stranger a conversation if they approach you on the street.

You also don’t have to be on social media all the time. If you don’t want to use it for a week, don’t. If you only want it on your computer, and not your phone, to resist the urge to check it frequently, take the app off. Want to use your social media account only on the weekend? Once a day? While at conferences? Sure! Who says you have to do otherwise?

The important thing is that your mental wellness and well-being come first. Twitter’s feelings aren’t going to be hurt. You are the priority and the platform is your tool.


Not All of Social Media is a Dumpster Fire

Having thoroughly scared you with all the possibilities of how things could go wrong, I think it’s also necessary to highlight that not all of social media is a dumpster fire. Seriously. For me, using Twitter has been an invaluable way to network with communities of scholars (and others) in Asian Studies, Medieval Studies, Digital Humanities, and more. It’s kept me up to date with the latest publications and research and shown me the fascinating tools that people use in their work and their teaching. Because I’ve been present, visible, and active, people who might otherwise not have known about my work have reached out to invite me to conferences, to write for edited volumes and magazines, or to teach classes or workshops.

Because I share job data on East Asian Studies weekly, I’ve had non-profits related to Asian Studies reach out and ask me to speak to their representatives about my work. When I’ve gotten publicly cranky about errors in news articles, I’ve been asked by chief editors to speak with their writers and correct historical mistakes they've made. All of this has been a great way to gain experience in public engagement, forms of digital literacy, and where they intersect.

Crucially, being present on social media has allowed me to learn from my colleagues who are facing similar (and very different) challenges in their academic work and careers. It has enabled me to be in solidarity with them and also to be a strategic advocate in public spaces, whether it’s for students, faculty, departments, or the field at large. Being online and engaged comes with great visibility. As we’ve seen, there are dangers to that, but we should not forget the possibilities as well. This article may reflect the darker corners of Twitter, but on the whole, I’ve had wonderful experiences using social media as a part of my professional development and I encourage people to consider how it may or may not be useful to them.

If you’re wondering how to effectively use Twitter while giving your colleagues proper credit, creating content, and expanding your networks, I have written "A Guide to Best Twitter Practices for Academics (and Everyone Else)," which outlines some good strategies. On the topic of digital harassment and its impact on scholars of Asian Studies, you can also refer to a bibliography of suggested readings from our first Academics Online session. Be safe, and happy tweeting!


If you found this page or any other projects and public-facing writing on my site useful, please consider regularly supporting me via Patreon. Writing and coding this information takes hours (and lots of hair pulling over broken code!). There is a lot of invisible labor that goes into it, which I do in my spare time. Support from the community I do this for means a lot to me and helps keep this site running. 🙂 Thanks for reading!

Last updated 2022.05.20

Images from iconmonstr and Irasutoya.
  1. An earlier version of this article also appeared in Critical Asian Studies, October 12, 2021.

  2. It is also worth noting that Facebook purchased Instagram in 2012.

  3. A special thank you to Tristan Grunow for editorial assistance on this page!