this post was submitted on 11 Jul 2023
4 points (100.0% liked)

Technology

378 readers
1 user here now

This magazine is dedicated to discussions on the latest developments, trends, and innovations in the world of technology. Whether you are a tech enthusiast, a developer, or simply curious about the latest gadgets and software, this is the place for you. Here you can share your knowledge, ask questions, and engage in discussions on topics such as artificial intelligence, robotics, cloud computing, cybersecurity, and more. From the impact of technology on society to the ethical considerations of new technologies, this category covers a wide range of topics related to technology. Join the conversation and let's explore the ever-evolving world of technology together!

founded 2 years ago
 

The policy changes come after an NBC News investigation last month into child safety on the platform.

top 8 comments
[–] kuontom@kbin.social 3 points 1 year ago
[–] yourgodlucifer@kbin.social 2 points 1 year ago

Why were these things allowed before?

Wtf

[–] reclipse@lemdro.id 1 points 1 year ago* (last edited 1 year ago) (1 children)

What does this mean? They were allowed before?? WTF!

[–] CybranM@kbin.social 4 points 1 year ago

More likely they just weren't spotted. Think of it like cockroaches: you don't "allow" cockroaches to live in your house, but they very well might until you notice and exterminate them.

[–] Acetanilide@kbin.social 1 points 1 year ago

The fact that this is a thing is disturbing at best.

[–] VulcanSphere@kbin.social 1 points 1 year ago (1 children)

CSAM is bad, no matter whether it is human-created or machine-generated.

[–] Thorny_Thicket@sopuli.xyz 0 points 1 year ago (1 children)

That's more of a philosophical question, and I'm curious to hear why you think that way.

It's disturbing, sure, but so is scat porn, and as long as no one is being harmed or forced to do something against their will, I don't really see the problem.

If watching AI-generated stuff is enough to take the edge off so that one can resist the urge to harm actual people, then by all means.

[–] TwilightVulpine@kbin.social 2 points 1 year ago

You are forgetting the little detail that an AI's output is based on what has been put into it. If a model can output something like that, it's likely because real CSAM has been fed into it. It's not sprouting from the aether.