liz writes stuff down
26 Oct 2016

Reasons to kick Peter Thiel off Facebook’s board

There's a push to remove Peter Thiel from Facebook's board, but Mark Zuckerberg doesn't seem to care about the threat Thiel poses. Many of the arguments center on diversity, a tenet Facebook says it deeply values.

The ways Thiel fails to value diversity matter: his beliefs are not just a matter of intellectual debate but a very real threat to my safety. They are particularly transparent during the 2016 election season. I don't support the bigoted, sexist candidate that is Donald Trump, as Thiel openly and aggressively does, for a host of reasons. One of the most important is that I feel directly threatened by having someone who freely admits to committing sexual assault hold the highest office in my country - I feel especially, intimately endangered as a survivor of sexual assault myself. That's just one of the numerous reasons a Trump presidency would be devastating for women, people of color, LGBTQIA+ people, and other oppressed groups.

However, Thiel's harmful views on diversity and elections reach much farther than his open, aggressive support of Donald Trump in this election. Thiel believes that women like me should be stripped of the right to vote - not just because that belief shows how clearly he doesn't care about women, but because women happen to disagree with his political views and actually hold the power to prevent the outcome he desires. The voter suppression he espouses directly eliminates free speech, something Facebook claims is incredibly important.

Kick Peter Thiel off Facebook's board. Kick him off because he discourages diversity. Kick him off because people like me don't feel safe with him on it. Kick him off because he doesn't believe in free speech.

22 Sep 2016

Triggering videos, thoughtful content warnings, and responsible feature release policies

Content warning: police murder of black people

We need to talk about potentially triggering videos and social media.

Social media is nearly unavoidable. There are a lot of upsides to using it, such as keeping up with family and friends you can't see frequently and reaching your customer base in more personalized ways, but it's also evolving very, very fast. One day we're posting our latest relationship update; the next we're getting news as soon as it breaks. It empowers marginalized voices to tell their stories - stories that often go untold.

One second we were uploading photographs; the next, gifs and videos - videos of our cats, then videos of police shooting black people. Maybe people are posting them to draw attention to issues the mainstream media doesn't see fit to focus on. Maybe police shootings are being posted as videos because, without footage, people refuse to believe that black victims posed no threat. I don't personally share these videos, so I can only guess at the motivations.

If we listen to black voices, we hear them pleading to keep videos depicting police shootings of minorities out of their view.

Black lives matter. Properly respecting black lives involves understanding that videos depicting the death of a human being can be very triggering, and respecting viewers by allowing them the chance to opt out of seeing them.

We need to use thoughtful content/trigger warnings.

Content warnings allow people to engage with content how and when they feel comfortable doing so. They are not censorship.

Placing content warnings on videos is much less straightforward than placing them on written pieces like this one, which contains no photographs or video. A written content warning at the start of a social media post is a good beginning. That works well for posts linking to articles that contain videos, as long as those videos don't sneak into the previews social media platforms like to embed.

A reader is then able to see such a tweet without having to see a potentially triggering graphic video.

We need tools that help us place content warnings specifically for video formats.

If someone comes across an article or social media post with a video directly in it, text-based content warnings aren't enough.

Preview frames might include a triggering image. We can replace the default frame selected from a video with one containing only the text of the relevant content warnings. We need tools that make this process easy. Ideally, social media platforms would build these tools into their upload interfaces; that would provide not just ease of use but also a convenient reminder to use them.
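As a rough illustration of what such an upload tool might do under the hood, here is a sketch that builds (but does not run) an ffmpeg command to render a text-only warning frame and attach it as the video's preview image. The filenames, filter parameters, and the choice of ffmpeg itself are all my assumptions for illustration, not a description of any platform's actual pipeline:

```python
# Hypothetical sketch of a helper a platform's upload tool might use.
# It builds an ffmpeg argument list that renders a plain black frame with
# the warning text and marks that image stream as the video's thumbnail.
# All names and parameters here are illustrative assumptions.

def build_preview_warning_cmd(video_in, video_out, warning_text,
                              size="1280x720"):
    """Return an ffmpeg argument list that sets a text-only preview frame."""
    # drawtext overlays the warning on a solid black lavfi source;
    # in real use, special characters in warning_text would need escaping.
    drawtext = (
        f"drawtext=text='{warning_text}':fontcolor=white:fontsize=48:"
        "x=(w-text_w)/2:y=(h-text_h)/2"
    )
    return [
        "ffmpeg",
        "-i", video_in,                      # original upload
        "-f", "lavfi", "-i", f"color=c=black:s={size}:d=1",
        "-map", "0", "-map", "1",
        "-filter:v:1", drawtext,             # render the warning text
        "-frames:v:1", "1",                  # keep a single warning frame
        "-c:v:1", "png",
        "-disposition:v:1", "attached_pic",  # mark it as the preview image
        video_out,
    ]

cmd = build_preview_warning_cmd("upload.mp4", "safe_preview.mp4",
                                "Content warning - police violence")
```

A real tool would also let the uploader preview the generated frame before posting, rather than trusting the command blindly.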

Also, the video may start playing anyway - maybe it autoplayed, maybe the reader accidentally clicked it. Tools that prepend text-only frames containing the relevant content warnings for the first ten seconds of a video would give someone the chance to pause or scroll past before triggering content is forced upon them. This would provide a buffer even when video autoplay is turned on.
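A minimal sketch of that ten-second buffer, again assuming ffmpeg and illustrative parameter choices of my own, could build a command that concatenates a text-only warning segment in front of the original footage:

```python
# Hypothetical sketch: build an ffmpeg argument list that prepends a
# ten-second text-only content-warning segment to a video, so autoplay
# shows only the warning at first. All names here are illustrative.

def build_warning_prefix_cmd(video_in, video_out, warning_text,
                             seconds=10, size="1280x720", fps=30):
    """Return an ffmpeg argument list that prepends a warning segment."""
    filter_complex = (
        # Render the warning text on a black source for `seconds` seconds...
        f"color=c=black:s={size}:r={fps}:d={seconds},"
        f"drawtext=text='{warning_text}':fontcolor=white:fontsize=48:"
        "x=(w-text_w)/2:y=(h-text_h)/2[warn];"
        # ...then concatenate it in front of the original video stream.
        "[warn][0:v]concat=n=2:v=1:a=0[out]"
    )
    return [
        "ffmpeg", "-i", video_in,
        "-filter_complex", filter_complex,
        "-map", "[out]",
        video_out,
    ]

cmd = build_warning_prefix_cmd("graphic_video.mp4", "buffered.mp4",
                               "Content warning - graphic violence")
```

In practice the two segments would also need matching resolutions and frame rates, and the audio track would need similar handling; this sketch only shows the shape of the idea.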

We need to release new features responsibly.

When Facebook and Twitter released their video autoplay features, they enabled them by default. That decision forced users to engage with content they didn't want to see.[1] We should not turn on new features like video autoplay by default.

Facebook does allow users to disable autoplay for videos, but that still doesn't make it okay to turn it on by default. Twitter allows you to turn off autoplay for some videos[2], but you're stuck with autoplay while you scroll through a Moment, regardless of your video autoplay settings. It is absolutely irresponsible of Twitter to force this feature on you in any situation.

Content warnings, tools, and responsible feature release policies will never be a complete solution, but they would let us benefit from having triggering videos available while causing far less unnecessary harm.

[1] It's also a huge mobile data hog.
[2] This appears to now be in the "Data" section of settings on iOS, and I am told it is a similar process for Android.