That feeling when you think "we should buy a full page in the Times and publish an open letter," and then you do. pic.twitter.com/BQiEawRA6d
— Stewart Butterfield (@stewart) November 2, 2016
Part of their letter reads:
Communication is hard, yet it is the most fundamental thing we do as human beings. We’ve spent tens of thousands of hours talking to customers and adapting Slack to find the grooves that match all those human quirks.
Slack knows it is used in a lot of places, in a lot of different ways. Many users have been requesting the ability to mute or block other users:
@GlennF Do you know if there's a way to mute certain users in Slack?
— Brianna Wu (@Spacekatgal) August 12, 2016
@paulcbetts conrgrats on joining Slack =) Do you know if you guys are planning to allow users to mute each other in channels?
— TheCodeJunkie (@TheCodeJunkie) January 15, 2015
I have found the first huge flaw with Slack, likely inherent it its initial design goal as a work collab tool.
Can't mute/ignore users.
— br¯\_(ツ)_/¯ty (@br_tal_ty) September 15, 2016
Users have offered numerous reasons they might want this feature:
With email, phone, IM, etc., if you can't resolve a situation, you can block someone. With @SlackHQ, you can ... quit your job?
— Geoffrey Thomas (@geofft) August 28, 2016
Another case for @SlackHQ implementing a mute feature: bots that many people find useful, but a small few find distracting.
— Brian by Santana ft (@brianloveswords) September 16, 2016
@SlackHQ not everyone can leave a bad situation immediately for many reasons. you’re enabling abuse by refusing to implement blocking.
— susan ☄️ (@bysusanlin) November 3, 2016
I found those tweets in a couple minutes, and you can easily find more. I'm not sure when Slack first heard users wanted blocking and muting, but they definitely did almost two years ago:
@NJDG Not at the moment, but as we continue to host different sizes and kinds of teams, this may be something we'll add.
— Slack (@SlackHQ) November 8, 2014
Despite hearing this request for two years, Slack's position now is that no one needs blocking and muting features:
@sondy No, you can't block or mute people in Slack. As a tool for teams to work together, that could make it very hard!
— Slack (@SlackHQ) August 22, 2016
That's not attentively listening to users like their ad claims.
There's a push to remove Peter Thiel from Facebook's board, and Mark Zuckerberg doesn't seem to care about the threat Thiel poses. Many of the arguments center on diversity, which is a tenet Facebook says it deeply values.
The ways Thiel fails to value diversity matter: his beliefs are not just a matter of intellectual debate but a very real threat to my safety. They are particularly transparent during the 2016 election season. I don't support the bigoted, sexist candidate that is Donald Trump like Thiel openly and aggressively does for a host of reasons. One of the most important is that I feel directly threatened by having someone who freely admits to committing sexual assault hold the highest office in my country - I feel especially, intimately endangered as a survivor of sexual assault myself. That's just one of the numerous reasons a Trump presidency would be devastating for women, people of color, LGBTQIA+ people, and other oppressed groups.
However, Thiel's harmful views on diversity and elections reach much farther than his open, aggressive support of Donald Trump in this election. Thiel believes that women like me should be stripped of their right to vote - not just because of the diversity concern regarding how he clearly doesn't care about women, but because women happen to disagree with his political views and actually hold the power to prevent the outcome he desires. The voter suppression he espouses directly eliminates free speech, something Facebook claims to be incredibly important.
Kick Peter Thiel off Facebook's board. Kick him off because he discourages diversity. Kick him off because people like me don't feel safe with him on it. Kick him off because he doesn't believe in free speech.
Content warning: police murder of black people
We need to talk about potentially triggering videos and social media.
Social media is nearly unavoidable. There are a lot of upsides to using it, such as keeping up with family and friends you can't see frequently and reaching out to your customer base in more personalized ways, but it's also evolving very, very fast. One day we're posting our latest relationship update; the next we're getting news as soon as it breaks. It empowers marginalized voices to tell their stories - stories that often go untold.
One second we were uploading photographs; the next, gifs and videos - videos of our cats, then videos of police shooting black people. Maybe people are posting them to draw attention to issues the mainstream media doesn't see fit to focus on. Maybe police shootings are being posted as videos because people don't believe black victims didn't pose a threat. I don't personally share these videos, so I can only guess at the motivations.
If we listen to black voices, we hear them pleading to keep videos depicting police shootings of minorities out of their view.
Can y'all not share video of black murder by cops? Every time, people share our deaths for what? Hoping people get empathy? Stop it
— Tanya D ✊ RiseUp! (@cypheroftyr) September 20, 2016
Black lives matter. Properly respecting black lives involves understanding that videos depicting a human being's death can be very triggering, and respecting viewers means giving them the chance to opt out of seeing them.
We need to use thoughtful content/trigger warnings.
Placing content warnings for videos is a lot less straightforward than placing content warnings for written pieces like this one, which has no photographs at all. A written content warning before a video in a social media post is a good start. They work well for posts that link to articles with videos in them, as long as those videos don't sneak into the previews social media platforms like to embed.
— Johnetta Elzie (@Nettaaaaaaaa) September 19, 2016
The reader is able to see this tweet without having to see a potentially triggering graphic video.
We need tools that help us place content warnings specifically for video formats.
If someone comes across an article or social media post with a video directly in it, text-based content warnings aren't enough.
Preview frames might include a triggering image. We can replace the default frame selected from a video with one that contains only the text of relevant content warnings. We need tools that make this process easy to do. Preferably, social media would include these tools in their upload interfaces themselves. Their inclusion would provide not just ease of use but also a convenient reminder to use them.
Also, the video may start playing anyway - maybe it autoplayed, maybe the reader accidentally clicked on it. Tools to supply text-only frames containing the relevant content warnings for the first ten seconds of a video would give someone the opportunity to pause or scroll past it before triggering content is forced upon them. This would provide a buffer even when video autoplay is turned on.
We need to release new features responsibly.
When Facebook and Twitter released their video autoplay features, they enabled them by default. Their decisions forced users to unnecessarily engage with content they didn't desire to see. We should not turn on new features like video autoplay by default.
Facebook does allow users to disable autoplay for videos, but that still doesn't make it okay to turn it on by default. Twitter allows you to turn off autoplay for some videos, but you're stuck with autoplay while you scroll through a Moment, regardless of your video autoplay settings. It is absolutely irresponsible of Twitter to force this feature on you in any situation.
Content warnings, tools, and responsible feature release policies will never be a complete solution, but maybe these tools and feature releases would allow for us to benefit from having triggering videos available without causing as much unnecessary harm.
 It's also a huge mobile data hog.
 This appears to now be in the "Data" section of settings on iOS, and I am told it is a similar process for Android.
Liz rides the subway is a series containing thoughts I have on the subway. On the 2 train to work, after watching two men of color have their bags searched at Grand Army Plaza:
I woke up today to my phone beeping in a pattern that wasn't my alarm:
WANTED: Ahmad Khan Rahami, 28-yr-old male. See media for pic. Call 9-1-1 if seen.
The alert was about the manhunt for the suspect believed to be responsible for the explosion in Manhattan on Saturday night and an earlier bombing in New Jersey.
I don't know what the best practices are for identifying a suspect. I don't know the best ways to involve lay people in helping law enforcement locate that suspect.
Kaveh Waddell notes that this was the first time this type of emergency alert was used for a manhunt and mentions the technical limitations of these alerts:
90 characters—less than a tweet’s worth—and it doesn’t support attachments, like photos. (That’s why this morning’s alert had to point people to the media for the suspect’s photo.) Messages also can’t include tappable URLs or phone numbers.
Maybe that's why the alert contained only an extremely basic description of a man with an Arabic-sounding name.
I get phone alerts from the New York Times, the Wall Street Journal, and other media; their push notifications also lack photographs. I didn't remember to look for a photo before hopping on the train and losing signal. (Honestly, I don't think I'd have trusted myself enough to identify him even if I had a photograph.) The technical limitations that prevent the inclusion of photographs create fear without even serving their intended purpose.
Instead, I think of the people I know who might fit the "5'6" 200-pound male with brown hair, brown eyes, and brown facial hair" description, about the two men of color I saw getting their bags searched at the station. They shouldn't have to change their daily routines.
I'm taking my usual train to work, but I'm shaving and dressing better than usual, and leaving my usual electronics-filled backpack at home.
— Geoffrey Thomas (@geofft) September 19, 2016
These fears shouldn't be surprising. Subway ads casually depict commuters who sound like they are raising false alarms.
These subway ads. "It could have been a false alarm, but it also could have been real. I feel like a hero." pic.twitter.com/n3wCGcyaD7
— Anil Dash (@anildash) September 19, 2016
NYC law enforcement has a history of unconstitutional, aggressive racial profiling. Unarmed people of color are shot and killed by the police.
We need to do better.
The suspect was apprehended, and getting the public involved in the search wasn't a fruitless idea because that's how he was found. However, the bar owner identified him after watching CNN on his laptop, not from a push notification without a photo.
A little over a week ago, an unnecessary dick joke was sent to a mailing list I'm on. It's a mailing list related to a conference I go to, and the joke added absolutely no value to the conversation. Another young woman and I criticized this behavior, citing how it pushes women away from tech and conferences like this one. We were met with lots of men telling us that we were either being too sensitive, violating their First Amendment rights, or failing to note the large body of casual sexism towards men in the world. The men on this list constantly need to be reminded that women are the subject of the overwhelming majority of sexualized jokes, both within tech culture and in general. They consistently ignore how those jokes are part of the thousands of systemic paper cuts that, unsurprisingly, push women out of tech. Some men even had the gall to tell us off by saying that this wasn't the reason we don't have more women in tech because all the blame should fall on the leaky pipeline that doesn't get enough girls involved in tech, despite the fact that half the women who are already in tech leave. Somehow, words coming directly from women they know about their own experiences with tech culture can't possibly be valid.
This sort of casual sexism happens a lot on this mailing list. More overt sexism happens on this list, too. Sometimes I write emails or even blog posts that directly respond to it - Dresses, "dressing up", and the software industry, Why is it easier to teach girls to code than to teach ourselves to treat women well?, and I've been programming since I was 10, but I don't feel like a "hacker" - but it always takes a lot out of me. My responses are met with hostility because I dare to question the way things are, so I find myself putting excessive care into tiptoeing around men's feelings when presenting how their actions harm the careers and safety of people like me. Sometimes, even the most measured of my responses are mocked, so I find myself too scared to reply to some of the worst affronts against women. I get people responding to me off list for more than just clarification, presumably because it's easier to tell people off in private. My "allies" generally reply quietly to just me and maybe some of the other young women, but these "allies" rarely confront the list.
I'm unhappy with being more or less solely responsible for making this space a comfortable, safe environment for me, and frankly, no amount of effort on my part will be enough without the male majority prioritizing this, too. This general unwillingness to improve the culture is the reason I am skeptical of inviting other young women I know to the conference. I just don't have the energy for unappreciated and unaided diversity efforts to ensure they will be comfortable and safe. I am not alone in this thinking: other young women on the list also feel that we're constantly going at this alone. We're tired.
The last time I went to this conference was in 2014. All the attendees contribute to the programming through preparing talks ahead of time, scheduling ad hoc sessions for the evenings, and being present for less formal conversation over meals and in the hallways. The conference rents out an entire, somewhat remote resort, and the result is an immersive, intense experience. Somehow, despite being what some call "very introverted", I find the environment to primarily be giving and energizing. I have a lot of unique conversations with incredibly talented and influential individuals, many of whom I only see once a year at this event.
However, these talented individuals aren't a diverse group. At 27, I'm one of the youngest there by far. Women make up a small fraction of the attendees; young women even less so. (I can count the number of women 30ish or younger on one hand.) There aren't a lot of young men. The vast majority of attendees are white like me.
The conference provides a wide range of programming, mostly on detailed technical topics or broad, creative ideas for using tech to improve the world. Recently, there have been some efforts to improve diversity, including having a code of conduct and scheduling a diversity talk in the main programming. That talk, like all talks, was to be done by attendees - women - for free.
I'm not sure how effectively the scheduled talks are usually planned at this conference; I've only been involved in a few. Somehow, I spent about two hours completely planning and getting the right people for one of the technical hours, almost entirely at my leisure before the conference, but I had to spend nearly eight hours - the entire first evening of the two and a half day conference, late into the night - working on the diversity talk despite not even being the point person for it. The talk itself went okay, though I heard secondhand that some attendees still didn't believe there even was a problem. We moderated how we gave our time to questions well. (Often this involved not giving time to questions.) I guess I was proud of the talk.
But I was also really, really, really burned out for the rest of the conference. I ran some other, more casual technical chats in the late night programming - sessions covering topics that sit at the heart of the conference, the topics I and other attendees go to the event to be a part of - but it felt more difficult than the other years I've gone. I slept a lot more, as I usually do when I'm emotionally drained, and probably missed out on some of the most interesting conversations since those have historically happened for me around 2am. I felt cheated because I didn't have the privilege of spending that fifth of the conference on the events it's advertised to be about because I needed to be doing diversity 101 instead.
I didn't go last year for a variety of reasons. Ultimately, it would have been a logistical nightmare, so I didn't even have to weigh the considerations above. This year, I could easily go, but these experiences sit heavily on my mind. If I choose not to go this year, this will be why.
 No, the First Amendment does not cover speech, offensive or otherwise, on private mailing lists.
 This is not surprising at all. Men try to gaslight us into believing that our experiences in the industry aren't valid all the time.
 Women aren't the only group who are hurt on this list. There was a painful thread about disability a while back. Interestingly, a disproportionate amount of the support for the specific person affected was from women - both disproportionately to the percent of women on the list and to the overall percent of emails sent by women to the list.
 Though I haven't heard of this being enforced or of the women I know feeling comfortable enforcing it, so I'm skeptical. I know this conference, like so many, isn't safe - I know things have definitely happened in the years before, and my instinct is that a lack of incidents is almost certainly not why I haven't heard of the code being used since its institution.