this post was submitted on 20 Jun 2024
549 points (100.0% liked)

Science Memes
top 31 comments
[–] maegul@lemmy.ml 143 points 3 months ago (8 children)

Yea, academics need to just shut the publication system down. The more they keep pandering to it, the more they look like fools.

[–] KillingTimeItself@lemmy.dbzer0.com 45 points 3 months ago (1 children)

i think this is less of a meme, and more of a scientifically dystopian fun fact, but sure.

[–] skillissuer@discuss.tchncs.de 4 points 3 months ago (1 children)

the fact, is in fact, rather fun(ny)

[–] NigelFrobisher@aussie.zone 38 points 3 months ago (1 children)

The famously uneditable PDF format.

[–] boonhet@lemm.ee 12 points 3 months ago

In metadata, no less.

[–] tuna@discuss.tchncs.de 37 points 3 months ago

Imagine they have an internal tool to check if the hash exists in their database, something like

"SELECT user FROM downloads WHERE hash = '" + hash + "';"

You set the pdf hash to be 1'; DROP TABLE books;-- they scan it, and it effectively deletes their entire business lmfaoo.

Another idea might be to duplicate the PDF many times and insert bogus metadata for each. Then submit requests saying that you found an illegal distribution of the PDF. If their process isn't automated it would waste a lot of time on their part to find the culprit Lol

I think it's more interesting to think of how to weaponize their own hash rather than deleting it
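The injection idea above can be sketched with Python's stdlib sqlite3. Everything here is made up for illustration (table, column, and hash values; nothing reflects any publisher's actual tooling), and since sqlite3's execute() refuses to run more than one statement at a time, the literal DROP TABLE payload would just raise an error there, so this shows the row-leaking variant of the same flaw instead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE downloads (user TEXT, hash TEXT)")
con.execute("INSERT INTO downloads VALUES ('alice', 'abc123')")

# Vulnerable: splicing the scanned metadata hash straight into the SQL string.
# A "hash" of  1' OR '1'='1  matches every row, leaking all users.
evil_hash = "1' OR '1'='1"
query = "SELECT user FROM downloads WHERE hash = '" + evil_hash + "';"
leaked = con.execute(query).fetchall()

# Safe: a parameterized query treats the hash as data, never as SQL.
safe = con.execute(
    "SELECT user FROM downloads WHERE hash = ?", (evil_hash,)
).fetchall()
```

Here `leaked` contains every user while `safe` is empty, which is the whole argument for parameterized queries: the payload is compared as a literal string instead of being parsed as SQL.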

[–] chemicalwonka@discuss.tchncs.de 26 points 3 months ago

Elsevier is the reason I donate to Sci-Hub.

[–] Dark_Dragon@lemmy.dbzer0.com 22 points 3 months ago (4 children)

Can't all of us researchers who are technically good at web servers start an open-source alternative to these paid services? I get that we need to publish with a renowned publisher, but we could also decide together to publish to an open-source alternative as well. That way the open-source option grows too.

[–] BeardedGingerWonder@feddit.uk 10 points 3 months ago (1 children)
[–] Dark_Dragon@lemmy.dbzer0.com 2 points 3 months ago (1 children)

Does it have all the new research papers regarding medicine, pharmacological action, newer drug interactions, and stuff?

[–] JackbyDev@programming.dev 3 points 3 months ago

That's not what was asked for though lol

[–] Sal@mander.xyz 8 points 3 months ago* (last edited 3 months ago) (2 children)

Some time last year I learned of an example of such a project (peerreview on GitHub). The goal was to create an open access "Peer Review" platform:


Peer Review is an open access, reputation based scientific publishing system that has the potential to replace the journal system with a single, community run website. It is free to publish, free to access, and the plan is to support it with donations and (eventually, hopefully) institutional support.

It allows academic authors to submit a draft of a paper for review by peers in their field, and then to publish it for public consumption once they are ready. It allows their peers to exercise post-publish quality control of papers by voting them up or down and posting public responses.


I just looked it up now to see how it is going... And I am a bit saddened to find out that the developer decided to stop. The author has a blog in which he wrote about the project and about why he is not so optimistic about the prospects of crowd sourced peer review anymore: https://www.theroadgoeson.com/crowdsourcing-peer-review-probably-wont-work , and related posts referenced therein.

It is only one opinion, but at least it is the opinion of someone who has thought about this some time and made a real effort towards the goal, so maybe you find some value from his perspective.

Personally, I am still optimistic about this being possible. But that's easy for me to say as I have not invested the effort!

[–] fossilesque@mander.xyz 2 points 3 months ago (1 children)

I do like the intermediaries that have popped up, like PubPeer. I highly recommend that everyone get the extension as it adds context to many different articles.

https://pubpeer.com/

[–] Sal@mander.xyz 2 points 3 months ago (1 children)

That's really cool, I will use it

[–] fossilesque@mander.xyz 2 points 3 months ago

It's been surprisingly helpful, it even flags linked pages, like on Wikipedia.

[–] barsoap@lemm.ee 2 points 3 months ago

This kind of thing needs to be started by universities and/or research institutes. Not the code part, but the organising the first journals part. It's going to get nowhere without establishment buy-in.

[–] No_Change_Just_Money@feddit.de 3 points 3 months ago (1 children)

I mean, a paper is renowned if many people cite it

We could just try citing more free papers, whenever possible (as long as they still have peer review)

[–] barsoap@lemm.ee 2 points 3 months ago

Citation count is a shoddy metric for a paper's quality. Not just because there's citation cartels, but because the reason stuff gets cited is not contained in the metric. And then to top it all off as soon as a metric becomes a target, it ceases to be a metric.

[–] vin@lemmynsfw.com 2 points 3 months ago

The challenge is how to jump-start a platform that researchers will actually come to

[–] NeatNit@discuss.tchncs.de 14 points 3 months ago (3 children)

I kind of assume this with any digital media. Games, music, ebooks, stock videos, whatever - embedding a tiny unique ID is very easy and can allow publishers to track down leakers/pirates.

Honestly, even though as a consumer I don't like it, I don't mind it that much. Doesn't seem right to take the extreme position of "publishers should not be allowed to have ANY way of finding out who is leaking things". There needs to be a balance.

Online phone-home DRM is a huge fuck no, but a benign little piece of metadata that doesn't interact with anything and can't be used to spy on me? Whatever, I can accept it.
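How easy "a tiny unique ID" really is can be shown in a few lines of stdlib Python. This is a purely hypothetical scheme (the marker string, the catalog, and the append-to-the-end trick are all invented for illustration; real watermarks are usually hidden in metadata fields or content streams, not tacked on the end):

```python
import uuid

CATALOG = {}  # download_id -> user: the publisher's side of the scheme


def stamp(pdf_bytes: bytes, user: str) -> bytes:
    """Tag a download with a unique ID; most PDF readers tolerate trailing bytes."""
    download_id = uuid.uuid4().hex
    CATALOG[download_id] = user
    return pdf_bytes + b"\n%%DL-ID:" + download_id.encode()


def trace(leaked: bytes):
    """Recover the user from a leaked copy, if the mark survived."""
    parts = leaked.rsplit(b"%%DL-ID:", 1)
    if len(parts) == 2:
        return CATALOG.get(parts[1].strip().decode())
    return None
```

The point of the sketch: the file itself never phones home, but any copy that surfaces later can be looked up against the catalog, which is exactly the benign-but-trackable trade-off described above.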

[–] henfredemars@infosec.pub 37 points 3 months ago (1 children)

I object because my public funds were used to pay for most of these papers. Publishers shouldn’t behave as if they own it.

[–] NeatNit@discuss.tchncs.de 11 points 3 months ago

That's true. I was actually thinking/talking about this practice in general, not specifically with regards to Elsevier.

I definitely agree that scientific journals as they are today are unacceptable.

[–] cron@feddit.de 8 points 3 months ago

Definitely better than some of the DRM-riddled proprietary eBook formats.

[–] aberrate_junior_beatnik@midwest.social 5 points 3 months ago (1 children)

Plus, if you have two people with legit access, you can pretty easily figure out what's going on and defeat it.

[–] blindsight 2 points 3 months ago

It would be pretty trivial for a script to automatically detect and delete tags like this, I would think. Diff two versions of the file and swap all diff characters to any non-display character.
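That diff-and-blank idea, as a stdlib sketch. It assumes the two copies are byte-aligned, i.e. the watermark substitutes characters in place rather than shifting offsets; real PDFs with variable-length metadata would need a smarter alignment step first:

```python
def scrub(copy_a: bytes, copy_b: bytes) -> bytes:
    """Blank every byte where two legitimate copies disagree,
    destroying any per-copy identifying mark."""
    if len(copy_a) != len(copy_b):
        raise ValueError("copies aren't byte-aligned; a real diff is needed")
    # 0x20 is an ASCII space, a safe "non-display" filler for text-like fields.
    return bytes(a if a == b else 0x20 for a, b in zip(copy_a, copy_b))
```

With two legit downloads in hand, anything that differs between them is by definition the tracking data, which is why per-copy watermarks are fragile against even two cooperating subscribers.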