If I, hypothetically, wanted to abuse a system, I'd probably come up with a way to modify that system so that I could abuse it, along with a plausible explanation for why I'm not actually going to do that, so that others will agree to the change.
But let's assume, for the sake of argument, that Google and/or the people who wrote this are actually acting in good faith. That still won't stop other large companies like Microsoft, Apple, etc., or even future Google employees, from abusing the system later on.
Yes, the potential for abuse is the big deal here. And you know humans: if it can be abused, someone will try.
Sure, but this is also a solution to the existing abuse that runs rampant. Which abuse is better?
I'm sure the same arguments were made against anti-virus software back in the beginning: "They're only doing this so that in the future they can flag all their competitors' programs as viruses" and "they're only doing this so they can choose who can use what." The parallels are strong.
Is there a way to stop the existing abuse without introducing a different kind of abuse? Ideally, that's what we should aim for, if it's possible at all.
If that's not possible, restricting people's freedoms in the digital world (or the real world, for that matter) to prevent some from abusing those freedoms doesn't sound like such a great proposition. As for "which abuse is better", I'd argue that if I have to be abused one way or another, I'd prefer to be free and in control so I have a chance to stop it myself ;)
(What freedoms, you might ask? The freedom to run my own choice of operating system, my own choice of browser, etc., on a computer that I own, maybe even built myself, without being prevented from accessing the internet at large.)
And I'm sure some of those companies, or some of those companies' employees, wrote some viruses themselves ;) But really, we can only speculate. Most are definitely legit and helpful.
The key question here is: who is in control, the user of the software or the company that made it? I'd say even with antiviruses, the user is in control and can choose a different antivirus, or no antivirus at all (like me). Under this Google proposal, it seems Google and other big corporations will be in control, not the user. That's why it's bad. If I have to be abused, I'd at least like to be in control so I can (try to) prevent it.