This post was submitted on 20 Jan 2024

Futurology

[–] Endorkend@kbin.social 21 points 10 months ago (1 children)

It's obvious when looking at Google and Bing results.

Try to find any sort of objective information, and the first 3-4 pages will almost all be AI-generated garbage that took most of its information from some other highly outdated source that was garbage to begin with.

And since these sites are AI-driven, they can automatically manipulate search results and keep dates and timestamps updated, so that whenever Google visits, the page is always the "newest" information.

[–] snooggums@kbin.social 10 points 10 months ago (1 children)

When the first five results are the same sentences worded slightly differently, like a freshman essay, it's not a good sign that I'll find a real answer.

[–] Endorkend@kbin.social 12 points 10 months ago (2 children)

The most annoying thing is that almost all tech information has fallen victim to this shit.

We now have to go back to pre-2000s methods of searching: first identifying sites as reliable, then relying on those sites' own search engines not to suck.

In some cases, this is workable.

In cases where the sites have integrated Google searches, this is even more useless than using Google itself.

[–] Lugh@futurology.today 6 points 10 months ago (5 children)

Someone should invent a search engine that allows for curated sources. For most things, I'd love to search among the top few thousand sites, and exclude everything else.
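A minimal sketch of that idea (all names and the result shape below are hypothetical, not any real engine's API): take raw search results and keep only hits whose domain is on a curated allowlist.

```python
from urllib.parse import urlparse

# Hypothetical curated allowlist: the "top few thousand sites" idea,
# trimmed to three entries for illustration.
TRUSTED_DOMAINS = {"arstechnica.com", "stackoverflow.com", "servethehome.com"}

def domain_of(url):
    """Return the host part of a URL, minus any leading 'www.'."""
    host = urlparse(url).hostname or ""
    return host.removeprefix("www.")

def curate(results, trusted=TRUSTED_DOMAINS):
    """Keep only results whose domain is on the curated list."""
    return [r for r in results if domain_of(r["url"]) in trusted]

raw = [
    {"url": "https://www.stackoverflow.com/q/123", "title": "Real answer"},
    {"url": "https://seo-spam.example/best-top-10", "title": "AI filler"},
]
print(curate(raw))  # only the stackoverflow.com result survives
```

Exact-domain matching keeps the sketch simple; a real engine would also want wildcard subdomains and ranking, not just filtering.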

[–] Semi-Hemi-Demigod@kbin.social 4 points 10 months ago

Yahoo started out like this. They had humans curating the sites that they searched, and it was pretty good until the web got too big for that to be efficient.

[–] Endorkend@kbin.social 3 points 10 months ago

I've got exactly that running on my home network for tech stuff.

I've thought of opening it up, and have even been thinking of building a group of trustworthy people to do the curation, but I generally can't be bothered interacting with people that much. I used to be highly active on forums like MadOnion/Futuremark, [H], etc., but those days are long behind me; these days I post a bit on Reddit and talk to my wife, and that's about it.

If things continue to go to shit as much as they have, I may open it up anyway, mostly because maintaining and re-curating sites is a drag on its own.

The number of sites that were once great tech spots and then got gulped up by the same ol' same ol' big tech outlets and turned into generic shit: it's not that they've become uncountable, it's that it's almost every single one of them.

The best still seems to be simply posting questions on the few OG computer/tech forums that managed to survive.

For hardware and OS, places like ServeTheHome, [H], AnandTech, TechPowerUp, etc.

For programming information, it's so murky I can't even suggest any specific sites anymore, not even Stack.

Phone/tablet info: even XDA is getting murky, mostly because a lot of users there only watch the forum for their specific device, so if yours isn't one a lot of people use, info gets super limited.

It's gotten bad out there.

[–] ApathyTree@lemmy.dbzer0.com 3 points 9 months ago* (last edited 9 months ago)

I haven’t used kagi, but I believe you can do exactly that with it. You do have to pay for the service, but that’s probably a good thing.

This is a link to the features page. It allows you to permanently ban or boost results from specific domains. But you may need to put in some manual effort to make that happen; I don't really know if there are community-curated lists or anything for that.

But you can also see if the result is popular, and they seem to work pretty hard to make their platform worth the spend. Everything I’ve heard from people who use it is good.

https://blog.kagi.com/kagi-features

[–] Endorkend@kbin.social 2 points 10 months ago

No need to invent.

That's how search engines, including Google, Yahoo, and all the other big ones, originally worked.

You didn't get indexed by default.

You either got indexed by being submitted or by being referenced often by one or more well represented sites.

It's only later in the game that they started crawling everything.

[–] Truck_kun 1 points 9 months ago

Someone should invent a search engine that allows for curated sources. For most things, I'd love to search among the top few thousand sites, and exclude everything else.

While I was typing up and fleshing out an idea about curated source lists for search engines, your post beat me to the punch.

As others have said, a curated internet is very old-timey and kind of limited, but I think what I fleshed out could work well with the modern internet and be interesting. Maybe a major search engine might actually take up the task if the user demand is there.

The quality of Google's search results has been trending downward for years, and maybe this would boost the quality of results again (albeit with their ads still stuck in the results).

[–] Truck_kun 2 points 10 months ago* (last edited 10 months ago) (1 children)

Well, maybe Google could add a curated feature (not curated by them, that would suck) whereby users can publish lists of trusted sites to search, and a user can optionally select a curated list from someone they trust, so Google will only search sites on that list.

Possibly allow multiplexing of lists.

So say I'm looking for computer security info: I could pick a curated list of sites "Steve Gibson" trusts, plus a list of trustworthy sources "Bleeping Computer" uses, and anything I search for will use both lists as the base for the search.

Maybe it isn't something people even publish to the search engine; maybe they publish a file on their site that people can point the search engine to, like, in Steve Gibson's case, the fictitious file grc.com/search.sources, or a new file format like .cse (curated search engine), e.g. grc.com/index.cse

Maybe allow individual lists to multiplex other lists. Something like the following, which multiplexes two lists on top of some additional sites, subdomains, directories, and * wildcards for all subdomains:

multiplex: grc.com/search.cse
multiplex: bleepingcomputer.com/search.sources
arstechnica.com
*.ycombinator.com
stackoverflow.com
security.samesite.com
linux.samesite.com
differentsite.com/security
differentsite.com/linux
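A rough illustration of how such a list could be resolved (the file contents, the .cse/.sources names, and the fetch stand-in below are all hypothetical, not a real spec): expand "multiplex:" directives recursively into one flat set of site patterns.

```python
# Stand-in for fetching curated lists over HTTP; contents are invented.
LISTS = {
    "grc.com/search.cse": (
        "multiplex: bleepingcomputer.com/search.sources\n"
        "arstechnica.com\n"
        "*.ycombinator.com"
    ),
    "bleepingcomputer.com/search.sources": (
        "stackoverflow.com\n"
        "differentsite.com/security"
    ),
}

def resolve(list_url, seen=None):
    """Recursively expand 'multiplex:' directives into one pattern set."""
    seen = seen if seen is not None else set()
    if list_url in seen:          # guard against multiplex cycles
        return set()
    seen.add(list_url)
    patterns = set()
    for line in LISTS.get(list_url, "").splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("multiplex:"):
            patterns |= resolve(line.split(":", 1)[1].strip(), seen)
        else:
            patterns.add(line)
    return patterns

print(sorted(resolve("grc.com/search.cse")))
```

The cycle guard matters: two lists that multiplex each other would otherwise recurse forever.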

Honestly, it sounds like a horrible idea, but in a world filled with AI-made content, it may become a necessity.

Anyways, I officially put the above idea into the public domain. Anyone can use or modify it; feel free, Google/Bing.


[–] Truck_kun 1 points 10 months ago

Apparently, in the time I spent thinking it over, typing it up, changing things, etc., someone else posted a curating idea, so maybe it's not such a bad idea after all. An AI-content internet is going to suck.

To expand on why it sounds like a horrible idea: if people rely on it too much, it creates a bubble and limits the ability to discover new things or ideas outside that bubble. But if everything outside that bubble just sucks or is inaccurate, meh, what are you going to do? Especially if you're researching for something you're working on, could be a paper, a project, maybe something with dire financial or safety consequences if you get it wrong, where you need the information to be reliable.

[–] TheOakTree 12 points 9 months ago* (last edited 9 months ago)

I was trying to install a mod for a game yesterday. Gave it a Google search. The first 5 links were trash sites that literally just said "download the mod files," "install the mod," "enjoy the game."

No other instructions, no links, irrelevant images and captions. Just random filler details about the base game.

Lol.

[–] Kolanaki@yiffit.net 6 points 9 months ago* (last edited 9 months ago)

Man, I have been accused of being AI tons of times in the last few years. I don't think people are very good at distinguishing reality from AI when it comes to text.

Researchers at the Amazon Web Services AI lab found that over half of the sentences on the web have been translated into two or more languages...

They attribute this to machine learning algorithms, yet even without those, translations of translations of translations lose accuracy when done by people.

[–] Lugh@futurology.today 4 points 10 months ago

One of the ironies of Google leading so much cutting-edge AI development is that it is simultaneously poisoning its own business from within. Google Search is getting worse and worse, on an almost monthly basis, as it fills up with ever more SEO spam. Early adopters are abandoning it for ChatGPT-like alternatives, which means the mass market probably soon will too.

The other irony is that it will probably take AI to save us from AI-generated SEO spam. For everyone touting AI products that will write blogs and emails, there will be people selling products that detect their garbage and save you from wasting your time reading it.

[–] Endward23@futurology.today 3 points 9 months ago

I hear the message, but to be honest, I can't believe it. There must be something I don't get. On second thought, though, I do see a lot of dubious results in Google searches.

[–] msage@programming.dev 2 points 9 months ago (1 children)

I would argue that most webpages have been generated without human input for a long time. Automated scam pages with sketchy download links were the norm years before any "modern AI" was a thing.

[–] gandalf_der_12te@feddit.de 2 points 9 months ago

Yeah, the internet is mostly a tool for machines to communicate with one another.