this post was submitted on 22 Jul 2023
89 points (100.0% liked)

Technology

37742 readers

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago

The social media platform Bluesky recently had an incident where a user created an account with a racial slur as the handle. The Bluesky team quickly removed the account but realized they should have had automated filters in place to prevent such issues. They are now implementing a two-step automated filtering and flagging system for user handles while still involving human moderators. The team acknowledges they were too slow to communicate with the community about the incident and are working to improve their Trust and Safety team and communication processes going forward. They are committed to learning from this mistake and building a safer and more resilient social media platform over time.
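For anyone wondering what a "two-step automated filtering and flagging" pipeline for handles might look like in practice, here is a minimal Python sketch. The wordlists, function names, and the leetspeak mapping are hypothetical illustrations, not Bluesky's actual implementation:

```python
import re

# Hypothetical wordlists -- placeholders, not Bluesky's real lists.
HARD_BLOCKLIST = {"unambiguousslur"}   # step 1: rejected automatically
REVIEW_WATCHLIST = {"ambiguousterm"}   # step 2: held and flagged for a human moderator

def normalize(handle: str) -> str:
    """Lowercase and collapse common digit substitutions so obvious evasions still match."""
    handle = handle.lower()
    handle = handle.translate(str.maketrans("013457", "oleast"))  # crude leetspeak mapping
    return re.sub(r"[^a-z]", "", handle)  # drop separators, dots, underscores, etc.

def check_handle(handle: str) -> str:
    """Return 'reject', 'flag_for_review', or 'allow' for a requested handle."""
    norm = normalize(handle)
    if any(term in norm for term in HARD_BLOCKLIST):
        return "reject"           # never registered
    if any(term in norm for term in REVIEW_WATCHLIST):
        return "flag_for_review"  # queued for human review, as the post describes
    return "allow"

print(check_handle("nice.handle"))        # allow
print(check_handle("Un4mbiguousSlur99"))  # reject
```

Substring checks like this both miss variants and over-match innocent names (see the comments below about false positives), which is presumably why the human-review step still matters.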


Previous post about this topic https://beehaw.org/post/2152596

Bluesky allowed people to include the n-word in their usernames | Engadget

Bluesky, a decentralized social network, allowed users to register usernames containing the n-word. When reports surfaced about a user with the racial slur in their name, Bluesky took 40 minutes to remove the account but did not publicly apologize. A LinkedIn post criticized Bluesky for failing to filter offensive terms from the start and for not addressing its anti-blackness problem. Bluesky later said it had invested in moderation systems, but the oversight highlighted ongoing issues, especially considering that Twitter co-founder Jack Dorsey backs the startup. The fact that Bluesky allowed such an obvious racial slur shows it was unprepared to moderate a social network effectively.

top 41 comments
[–] fades 92 points 1 year ago (7 children)

As a developer, I don’t see the issue?

They clearly didn’t cover username validation, or at least not to the extent of matching banned words; the devs fixed the problem within 40 minutes of the report (pretty great timing imo), and they have since implemented measures to avoid further issues.

What’s the fucking problem?? It’s still new as fuck, these things happen.

[–] dingus@lemmy.ml 52 points 1 year ago* (last edited 1 year ago) (1 children)

It's from a technical team that ostensibly should know better, because they have been working in this space for a long time. That's evidenced by their speed in handling it. However, it can easily be argued that this is a major thing that should have been implemented before invites started going out. Further, the amount of time it has taken for the company to muster a public response isn't encouraging, as they themselves seem to readily admit, by saying "they were too slow to communicate with the community about the incident and are working to improve their Trust and Safety team and communication processes going forward."

If this were the early 2000s and these people were fresh-faced college students like Mark Zuckerberg starting these services, maybe this would be different, but it's not.

Jack Dorsey started Twitter in 2006, 17 years ago, when he was 29 years old. He's 46 now, and his nearly twenty years running a similar service didn't teach him to start with this kind of thing?

It speaks to them being oblivious to these problems to begin with and waiting for problems to arise before they respond to them. It's absolutely true that their response time was commendable, but why need a response at all when the filter could easily have been implemented during the closed beta, before it became an invitation-based public beta? That, in turn, doesn't speak well to the likelihood of the service being run effectively with respect to harassment and abuse, if they wait for those to happen instead of being proactive.

I mean, you're a Beehaw user. Beehaw implements such things as username validation to prevent abuse and they're pretty fucking new too and they're not being run by a fucking nearly 20 god damn year microblogging veteran. Pot calling the damn kettle black. These Bluesky people are supposed to be professionals. If a bunch of ragtag nerds who do this shit in their spare time can figure it out, so can Jack fucking Dorsey.

EDIT: typos

[–] fades 12 points 1 year ago* (last edited 1 year ago)

Beehaw is utilizing Lemmy, whereas Bluesky is not utilizing an existing framework.

Additionally, Bluesky is still in active development and in early beta.

How is 40 minutes from the report too slow? It was 40 minutes, in a beta product. They said they were too slow because that was the PR post lmao

There are many other factors at play, such as the timelines they are dealing with. I have had projects where the timing was tough (competition, sometimes just contract/SOW delivery date changes, etc.) and UX was specifically disengaged or delayed.

Now is the ideal time to strike: the fediverse, IG Threads, Lemmy, Mastodon, etc. are all still in flux when it comes to community favorites. They are all motivated to get to market first.

The thought behind this approach is to get a foot in the door functionality-wise and revamp UX over time based on user feedback once established, in addition to internal evolution of the UI. It’s not like they said “yeah, fuck it, let’s let ‘em name themselves anything”; more likely they prioritized issues in the backlog and, being a startup, had the flexibility to leave UX for later or push the finer details of community safety to later phases. They want to move fast, so this isn’t to say it’s “not important.”

I submit that this whole situation was a failure of management, as you said, but not the disaster people are trying to make it out to be. These things happen when the dev team is forced to move quickly; it says nothing about the company’s values or what it really cares about. They clearly care, it’s just that they see getting to market as the paramount hurdle, with all others secondary to it. I don’t advocate for this approach, as I prefer show over tell, but I understand the thought process.

Bluesky is not even released yet; it’s still in early beta. Yes, that validation should have been there for signups, but I don’t agree that this signifies anything. How can anyone speak to their priorities and what is or isn’t essential to them when the product is still in active development with no release date in sight? Their signup is a waitlist, and the hosting provider (the first question they ask) has options for dev and staging servers.

[–] ozoned 22 points 1 year ago (3 children)

NO! No bugs! Be perfect!

Seriously though, I've seen way dumber shit in "production" ready code.

I won't use Bluesky, but this happens all the time. I also don't see the issue.

I've told folks before, once and last time in front of developers (they didn't find it funny), that all code is shit. Not because they're bad at it, but because it's impossible to account for EVERY possible factor. They always make a better idiot. Us: "Here's this square." Them: "I cut the corners off to fit it into the round hole and it no works!"

[–] dingus@lemmy.ml 17 points 1 year ago (1 children)

They always make a better idiot.

I always prefer the Douglas Adams way of saying it:

A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.

[–] fades 3 points 1 year ago

There’s a good reason why most devs make subpar QAs haha

[–] Piers 7 points 1 year ago (2 children)

Failing to even attempt to design and implement an important feature is not the same as a bug. Unless I'm missing something, they aren't saying "we did have systems in place to prevent people creating accounts with intentionally offensive usernames, but we oopsed, so it didn't work as intended until we fixed it." They're saying "it either didn't even occur to us that our software needed that, or we decided we just don't care, so we didn't even try to do it until people pointed out that we were missing this important thing, at which point we started working on it."

So, either they somehow just missed that this is something they need (which they really shouldn't have, and which suggests they aren't thinking even slightly about user conduct on their platform), or they did and decided they wanted to see if they could get away with just not doing it.

I understand it's easy to get lost in the core functionality of making the thing go but you can't lose sight of the actual intended outcome like this.

[–] chirospasm@lemmy.ml 6 points 1 year ago* (last edited 1 year ago) (2 children)

I suspect it may be a bit more along the lines of what you're describing here -- we expect some user experience patterns to already be in place, or at least considered, like not being able to select inappropriate handles. Former Twitter folks should know 'better.' From the outside looking in, it tracks.

I wonder if the Bluesky team, right now at least, is more engineer / dev heavy, and they have not brought on UX folks to help drive a product design that considers patterns we'd be used to experiencing. They may be operating pretty lean.

An idea, at least.

[–] Piers 3 points 1 year ago (1 children)

I think that's quite likely, yes. But that in and of itself indicates that management considers stuff like UX to be non-essential expertise that sits outside of what is required for a functioning lean operation.

[–] fades 3 points 1 year ago

Not entirely true; there are many other factors, such as timelines, to deal with. I am a professional developer, and I have had projects where the timing was tough (competition, sometimes just contract/SOW delivery date changes, etc.) and UX was specifically disengaged or delayed.

Now is the ideal time to strike: the fediverse, IG Threads, Lemmy, Mastodon, etc. are all still in flux when it comes to community favorites. They are all motivated to get to market first.

The thought behind this is to get a foot in the door functionality-wise and revamp UX over time based on user feedback once established, in addition to internal evolution of the UI. It’s not like they said “yeah, fuck it, let’s let ‘em name themselves anything”; more likely they prioritized issues in the backlog and, being a startup, had the flexibility to leave UX for later or push the finer details of community safety to later phases.

It’s truly difficult to point to the cause from the outside looking in, yet many here are happy to take this whole situation as evidence of either malice or incompetence. These days, when you release a product, it’s not done.

These days the following are common: day-one patches; frequent weekly/monthly/etc. update cadences; pushing through dev/dit/qa/prod rapidly; always keeping production in line with, or not far behind, develop (i.e., merging to dev, running CI/CD, and waterfalling through each environment all the way to prod).

I submit that this whole situation was a failure of management, as you said, but not the disaster people are trying to make it out to be. These things happen when the dev team is forced to move quickly; it says nothing about the company’s values or what it really cares about. They clearly care, it’s just that they see getting to market as the paramount hurdle, with all others secondary to it. I don’t advocate for this approach, as I prefer show over tell, but I understand the thought process.

Bluesky is not even released yet; it’s still in early beta. How can anyone speak to their priorities and what is or isn’t essential to them when the product is still in active development with no release date in sight? Their signup is a waitlist, and the hosting provider (the first question they ask) has options for dev and staging servers.

[–] astraeus@programming.dev 2 points 1 year ago

I would say it’s likely they are very lean. From what I’ve heard it isn’t more than a few people closely working with Jack Dorsey full-time right now. Here’s a blog post from last year with some hints as to the size of the core team.

While they definitely know better, it’s a closed beta and most of the users have already been vetted prior to invitation. The fact that someone made a bad name means people were testing what the system allows, which is what a closed beta is for. A team of even forty or fifty people working on a fresh project has plenty of other problems and issues to address, even if username filtering is an important one.

[–] ozoned 2 points 1 year ago (1 children)

We don't know they didn't design and implement it. It happens all the time: you implement a feature, it works, then there's a regression and you have no clue. 40 minutes to resolve means it was already there; there's no way you're building that from scratch, testing it, and pushing it in that timeframe.

I could be wrong.

[–] Piers 2 points 1 year ago (1 children)

It says in the post:

"realized they should have had automated filters in place to prevent such issues. They are now implementing a two-step automated filtering and flagging system for user handles while still involving human moderators."

They wouldn't need to implement a system they already implemented but wasn't working properly. They'd just be fixing it.

[–] ozoned 2 points 1 year ago

I misinterpreted that part or missed it. With you now. Thank you!

[–] agegamon 5 points 1 year ago* (last edited 1 year ago) (1 children)

One of my favorite jokes ever (forgive the dated lingo)

A code reviewer walks into a bar.

...runs into the bar.
...skips into the bar.
...handstands into the bar.
...does the hula into the bar.
...brings four people they just met into the bar.

And orders:
...1 beer
...10 beers
...99999 beers
...null beers
...a Pepsi
...a 10" personal pizza
...4 orders of salted peanuts
...DROP TABLE orders of salted peanuts

Nothing goes wrong.

Another person walks into the bar and asks to use the bathroom.

The bar goes up in flames.

*Edit - I forgot the drop table peanuts

[–] marco 2 points 1 year ago

Little Bobby Tables ;)

[–] audaxdreik@pawb.social 18 points 1 year ago* (last edited 1 year ago)

It's the one-two punch of "why wasn't it already in place" and "very bad, slow communication" wrapped up in "a team that really should've known better already". If any one of those had been different maybe the reaction wouldn't've been so strong. This just isn't what you want to see from a new service that's hoping to take on the entrenched Twitter (no matter how rapidly it may be declining, holdouts will be strong) and the evil Threads (which jumped itself so far ahead in userbase through ... shady tactics).

At the end of the day, this is a product. We have a right to demand better service if they want us using it (how they make a profit isn't our concern). This is also the best time to strike and lay down the groundwork for the kind of community we want to foster there. Sending a strong message that we want Twitter without the bad stuff that made us leave is very important. Did some people take it way too far? Probably, but you should know by now, being online, that you can't let the worst of everyone represent the whole.

[–] HughJanus@lemmy.ml 8 points 1 year ago

People just need things to be angry about I guess.

[–] okiokbar@lemm.ee 6 points 1 year ago (2 children)

Companies show what they care about by what problems they choose to focus on, or not. If you build a Twitter competitor and you don’t invest in community safety from the start, you’re showing what you value 🤷🏼‍♂️

[–] fades 12 points 1 year ago (1 children)

I would argue that responding within the hour to an issue they had no warning about does indeed show they care about the issue at hand.

Building a SaaS is a lot of work, and finer details like username validation can slip through the cracks, especially when it comes to startups.

You are making username validation out to encompass the entirety of community safety, and that’s quite a stretch as well. This wasn’t malicious, and they showed they cared by reacting ASAP and providing a post-mortem with steps forward to avoid it in the future.

They CLEARLY care. They are just rushing out the door because of Twitter, Facebook’s Threads, the fediverse, etc.; the competition is only getting fiercer. That isn’t an excuse but an explanation of how these finer details can get missed, especially when they already solved them once when they built Twitter.

You are taking a small technical hiccup as evidence of their culture as a whole, which is extremely unfair but okay.

For the record, I never cared about Twitter and I certainly don’t care about Bluesky or whatever it is. There is further nuance here, whether you choose to see it or not.

[–] okiokbar@lemm.ee 3 points 1 year ago

This isn’t happening in isolation. Bluesky has shown itself to not care about community safety in the past, their plans are (more or less) “allow everything and then try and hide the bad things from people that don’t want to see it”. Naturally, this hasn’t worked at all. (Who could have guessed?)

Not doing the obvious things on community safety is the plan. I guess it’s nice that they are responding in this case, but it takes a bit more than that to regain that trust.

[–] JackbyDev@programming.dev 2 points 1 year ago (1 children)

40 minutes. That's how quickly they solved it. To me, that sounds like showing what you value.

[–] okiokbar@lemm.ee 5 points 1 year ago

You treat this as a bug, others treat it as another sign of a lack of forethought on the core of their offering.

If this happened in isolation, people would be forgiving (or wouldn’t care, given how small Bluesky is), but it’s not. Bluesky has a whole theory about moderation and community safety, and half-assing fits with that theory.

[–] tone 5 points 1 year ago

They did conduct username validation. Many words, including “fuck,” were disallowed at username creation. They just chose not to include racial epithets in the filter list.
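If that's the case, a creation-time check like that stands or falls entirely on its wordlist. A rough sketch of the idea in Python (the list contents and the handles below are made up; only “fuck” comes from the reporting above):

```python
# Hypothetical creation-time filter: the check is only as good as its list.
DISALLOWED = {"fuck"}  # reportedly on the list; racial epithets reportedly were not

def handle_allowed(handle: str) -> bool:
    lowered = handle.lower()
    return not any(term in lowered for term in DISALLOWED)

print(handle_allowed("fuckface2000"))      # False -- caught, because the term is listed
print(handle_allowed("missing-term-here")) # True  -- anything absent from the list passes
```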

[–] CapedStanker 3 points 1 year ago* (last edited 1 year ago)

This would be like creating a family-safe environment and forgetting to require clothing, so a bunch of nudists show up and you're like "WHOOPS! WHO WOULD HAVE THOUGHT OF CLOTHING?" Then, when the parents are like "what the hell?", you reply with "I MEAN, THIS STUFF HAPPENS, YOU KNOW?"

Disallowing the n-word in usernames is beginner-level business requirements, dude.

[–] CherryClan 71 points 1 year ago (1 children)

this is why diversity initiatives are important. a team of all white dudes is gonna have some blind spots

[–] HughJanus@lemmy.ml 20 points 1 year ago (1 children)

Right. Us stupid white people didn't realize bad words existed!

[–] Lionir 108 points 1 year ago (4 children)

Your comment is in bad faith. Take a step back to consider how you interact with people.

[–] Blaze@sopuli.xyz 13 points 1 year ago (1 children)
[–] CapedStanker 2 points 1 year ago

Thanks, I'll check it out and direct any of my friends there who may want a space like this.

[–] cupcakezealot@lemmy.blahaj.zone 8 points 1 year ago (1 children)

On one hand, they definitely should have been aware of the possibility of abuse like this, especially since so many of them came from Twitter. But on the other hand, I've always thought it was asking a lot to have developers be exposed to, and compile, a list of slurs specifically to be able to block them out. :(

[–] dingus@lemmy.ml 7 points 1 year ago (2 children)

They probably don't have a list of slurs so much as they use partial variations in regular expressions for filtering, which I guess could be better or worse, depending on how you look at it. Better: they don't have to see the whole slur. Worse: they have to think deeply about the slur and all the variations of it that might arise.
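If it helps to picture what "partial variations" could mean in practice, here's a minimal regex sketch; the helper name and the stand-in term "badword" are made up, not anything from Bluesky's code:

```python
import re

def variant_pattern(term: str) -> re.Pattern:
    """Build a pattern that tolerates repeated letters and separators, e.g. 'b-a-a-d-w-o-r-d'."""
    sep = r"[\W_]*"                               # optional punctuation/underscores between letters
    parts = [re.escape(ch) + "+" for ch in term]  # each letter may repeat
    return re.compile(sep.join(parts), re.IGNORECASE)

pattern = variant_pattern("badword")
print(bool(pattern.search("b-a-d-w-o-r-d_123")))    # True
print(bool(pattern.search("BAAADWORD")))            # True
print(bool(pattern.search("perfectly_fine_name")))  # False
```

Which is exactly the kind of loose matching that starts hitting innocent strings, as the replies below point out.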

[–] peter@feddit.uk 5 points 1 year ago (2 children)

As they mentioned in the blog post, though, simply matching slurs anywhere inside a string will ban a lot of innocent people.

[–] ninchuka@lemmy.one 3 points 1 year ago

Yeah, wordlists for any kind of moderation can easily catch false positives.

[–] JackbyDev@programming.dev 5 points 1 year ago* (last edited 1 year ago)

I remember some post where someone's username, Nasser, got censored to N***er, making it look way fucking worse. It was in one of the Dark Souls games.
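That kind of mangling is easy to reproduce with naive substring replacement. A quick sketch; the assumption that the game's filter matched "ass" is mine, inferred from the censored output:

```python
import re

BLOCKED = ["ass"]  # assumed filter entry, inferred from "Nasser" -> "N***er"

def naive_censor(name: str) -> str:
    """Replaces the blocked string anywhere it appears, even inside other words."""
    for word in BLOCKED:
        name = re.sub(word, "*" * len(word), name, flags=re.IGNORECASE)
    return name

def boundary_censor(name: str) -> str:
    """Only replaces the blocked word when it stands alone, avoiding this false positive."""
    for word in BLOCKED:
        name = re.sub(rf"\b{re.escape(word)}\b", "*" * len(word), name, flags=re.IGNORECASE)
    return name

print(naive_censor("Nasser"))     # N***er -- the false positive that reads far worse
print(boundary_censor("Nasser"))  # Nasser
```

Word boundaries help for cases like this, but handles jam words together on purpose, which is part of why purely automated matching tends to get paired with human review.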

[–] HairHeel@programming.dev 7 points 1 year ago

Hmm, does lemmy have username filters?

[–] CapedStanker 5 points 1 year ago* (last edited 1 year ago)

This could be an innocuous regex bug, but considering the founder, and that there have already been reports of black users being harangued on the platform, I don't think we should give any benefit of the doubt. It definitely helps that they fixed it so quickly; normally a shit company wouldn't do anything for months and months or even years, or would hide behind some "free speech" simpleton kindergarten reasoning. People have been putting the n-word in their usernames for decades, so I don't think this is something where the devs can claim they didn't see it coming.