Discover more from Terms of Service with Chris Martin
Twitter's Algorithm Self-Examination [Content Made Simple]
Issue #217: Also, a fascinating new Discord, Nextdoor's unique strategy, and more.
Twitter is starting a new initiative, Responsible Machine Learning, to assess any “unintentional harms” caused by its algorithms. A team of engineers, researchers, and data scientists across the company will study how Twitter’s use of machine learning can lead to algorithmic biases that negatively impact users.
One of the first tasks is an assessment of racial and gender bias in Twitter’s image-cropping algorithm. Twitter users have pointed out that its auto-cropped photo previews seem to favor white faces over Black faces. Last month, the company began testing showing full images rather than cropped previews.
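To make the cropping audit concrete: one common way to test for this kind of bias is to feed the cropper paired images containing faces from two demographic groups and tally which face the crop centers on. Here is a minimal illustrative sketch of that tallying step; the data, the `audit_crop_picks` helper, and the labels are all hypothetical, not Twitter’s actual methodology or numbers.

```python
# Hypothetical bias-audit sketch: given paired test images (one white face,
# one Black face, equally prominent), record which face the auto-crop
# centered on in each trial, then compare selection rates across groups.
# All data below is made up for illustration.

from collections import Counter

def audit_crop_picks(crop_picks):
    """crop_picks: a list of labels, one per trial, naming which face
    the cropping model chose as the focal point. Returns the selection
    rate for each label."""
    counts = Counter(crop_picks)
    total = len(crop_picks)
    return {label: count / total for label, count in counts.items()}

# Illustrative results for 10 paired-image trials (fabricated data).
picks = ["white"] * 7 + ["black"] * 3
rates = audit_crop_picks(picks)
print(rates)  # {'white': 0.7, 'black': 0.3}
```

An unbiased cropper should pick each face at roughly equal rates on balanced pairs; a large, consistent gap like the one in this fabricated sample is the signal the users’ informal tests were pointing at.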
I am always going to be in favor of platforms doing some self-evaluation to improve the experience of their users. I’m glad Twitter is working on this.
ON THE POD
No pod this week.
HITTING THE LINKS
"Assuming press volume continues to decline, we're not planning additional statements on this issue," the email reads. "Longer term, though, we expect more scraping incidents and think it's important to both frame this as a broad industry issue and normalize the fact that this activity happens regularly."
"To do this, the team is proposing a follow-up post in the next several weeks that talks more broadly about our anti-scraping work and provides more transparency around the amount of work we're doing in this area," the message continues. "While this may reflect a significant volume of scraping activity, we hope this will help to normalize the fact that this activity is ongoing and avoid criticism that we aren't being transparent about particular incidents."
This is an interesting strategy! I’m not sure how well it will work, but it’s worth a shot!
In the blog post announcing the anti-racism notification, Nextdoor says that its similar Kindness Reminder has reduced “incivil content” by 30 percent. It’s hard to say whether this new feature will be as effective. The example that Nextdoor shows — someone responds to a Black Lives Matter post by saying “all lives matter,” then decides that they’d actually like to hear the person out after seeing the anti-racism notification — doesn’t really seem to match up with real-life behavior.
This is one of my favorite things I’ve ever been a part of on the internet. I love the idea, and it’s been so much fun to participate in so far. So many social media nerds in one place, chatting about social media trends and culture 24/7. So interesting!
What excites me so much about Sidechannel is the idea of a space on the internet that isn’t extractive but generative. Twitter is probably the place where I do the most ‘hanging out’ on the internet these days. But like most platforms, it feels so transactional. You are competing for likes and retweets and attention. It is next to impossible to have nuanced conversations and — as I wrote on Tuesday — often discussions meant for one group of people end up in the hands of a different, openly hostile audience. These spaces often feel toxic and exhausting and exploitative because the business model they’re built on is extractive. This is an attempt at something different — a social internet experience that gives you something in return and doesn’t leave you feeling exhausted and hollow.
THE FUNNY PART
If you like this, you should subscribe to my free newsletter of funny content I find online. It’s called The Funnies. It delivers on Saturday mornings.
You can subscribe to The Funnies here. (It is and will always be free.)