We’ve taken a look back at the “conversation” as it unfolded around the hashtag #SOTU.
On January 30, 2018, we saved 9 maps between 4:55 pm and 6:15 pm PST. This represented 1,800 total tweets generated by 659 unique profiles. We then reduced our data set to 182 profiles with no geographic identifiers and asked what, if anything, they have in common and what patterns might emerge. Here are two snapshots showing how we see the conversation.
We found the “conversation” to be clearly partisan and representative of the Blue/Red political divide. From the hashtag #SOTU we categorized these 182 Twitter profiles as:
- 95 Blue (we noted how closely this split mirrors the 2016 popular vote, where Blue took 48.2%)
- 87 Red (Red took 46.1% of the 2016 popular vote)
Within each of these groups, we then flagged profiles tweeting 50-plus times per day (their 7-day average) as showing an indicator of programmatic, bot-like behavior. We follow this benchmark as one established by the Oxford Internet Institute’s Computational Propaganda team.
- 30 Blue profiles show programmatic bot-like behavior:
  - 400+ tweets per day (pictured below) = 2
  - 300+ tweets per day = 2
  - 200+ tweets per day = 7
  - 100+ tweets per day = 10
  - 50+ tweets per day = 9
  - We also note grey-egg profiles = 4
- 47 Red profiles show programmatic bot-like behavior:
  - 400+ tweets per day (pictured) = 4
  - 300+ tweets per day = 3
  - 200+ tweets per day = 11
  - 100+ tweets per day = 9
  - 50+ tweets per day = 20
  - We also note grey-egg profiles = 7
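The 50-tweets/day benchmark above is straightforward to apply in code. Below is a minimal sketch, assuming we have each profile's tweet timestamps; the function names and threshold constant are our own illustration, not part of any established library.

```python
from datetime import datetime, timedelta

# Benchmark from the Oxford Internet Institute's Computational Propaganda
# team: 50+ tweets per day, averaged over a trailing 7-day window.
BOT_THRESHOLD = 50  # tweets per day

def tweets_per_day(timestamps, now, window_days=7):
    """Average daily tweet count over the trailing window."""
    cutoff = now - timedelta(days=window_days)
    recent = [t for t in timestamps if t >= cutoff]
    return len(recent) / window_days

def is_bot_like(timestamps, now):
    """Flag a profile whose 7-day average meets the benchmark."""
    return tweets_per_day(timestamps, now) >= BOT_THRESHOLD

# Hypothetical example: 420 tweets spread evenly over the past 7 days
now = datetime(2018, 1, 30, 18, 0)
stamps = [now - timedelta(minutes=24 * m) for m in range(420)]
# 420 tweets / 7 days = 60 per day, so this profile is flagged
```

A profile averaging 60 tweets per day, as in the example, would fall into the 50+ bucket of the tallies above.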
We also took note of the tweet tempo and volume of the most prolific profile from each group. From each profile’s last 200 tweets, we noted the account it had mentioned most.
For BlueBot #1: 13 of its last 200 tweets (all in less than an hour) were retweets of its most-mentioned account.
For RedBot #1: 34 of its last 200 tweets were retweets of its most-mentioned account (note 14 retweets in a 12-minute span).
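The most-mentioned-account tally above can be sketched in a few lines. This is an illustration under our own assumptions (tweets as plain text strings, handles extracted with a simple regex), not the exact tooling we used.

```python
import re
from collections import Counter

def most_mentioned(tweets):
    """From a list of tweet texts, return (handle, count) for the
    account mentioned most often, or (None, 0) if there are no mentions."""
    mentions = Counter()
    for text in tweets:
        # @handle mentions; \w+ stops at punctuation like ':' in "RT @x:"
        mentions.update(re.findall(r"@(\w+)", text))
    if not mentions:
        return None, 0
    return mentions.most_common(1)[0]

# Hypothetical tweets for illustration
tweets = ["RT @example_red: ...", "hello @example_red", "@other hi"]
# most_mentioned(tweets) -> ("example_red", 2)
```

Run over a profile's last 200 tweets, this yields the retweet/mention concentration we report for BlueBot #1 and RedBot #1.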
On a small scale, this highlights that programmatic bot-like behavior is a full-spectrum political activity. When we extrapolate this activity to scale, it becomes more evident how algorithms are being gamed, filter bubbles are becoming impregnable, and online noise pollution is burying the human voice.
Additionally, we observed a considerable amount of anonymity on both sides of this conversation. Few profiles displayed real avatars, and some of those that appeared real proved difficult to verify. There was little to indicate that these participants are real or who they claim to be, and none of the 77 bot-like profiles linked to other sources such as other social media accounts, blogs, or businesses.
There’s no shortage of objection to online anonymity. On one hand, it is a contributing factor to the overall problem of online harassment and information warfare; on the other, it strips away credibility. We see privacy, however, as the overriding consideration. From doxxing to the risk of being disappeared by an authoritarian regime for voicing dissent, stripping away the choice to mask one’s identity would simply further advance the role of social platforms as mechanisms of state surveillance.