ConsciousCode

joined 1 year ago
[–] ConsciousCode 16 points 1 year ago

It feels kind of hopeless now that we'd ever get something that feels so "radical", but I'd like to remind people that 80+ hour work weeks without overtime used to be the norm before unions got us the 40 hour work week. It feels inevitable and hopeless until the moment we get that breakthrough; then it becomes the new norm.

[–] ConsciousCode 22 points 1 year ago (1 children)

Good to note that this isn't even hypothetical, it literally happened with cable. First it was ad-funded, then you paid to get rid of ads, then you paid exorbitant prices to get fed ads, and the final evolution was being required to pay $100+ for bundles including channels you'd never use to get at the one you would. It's already happening to streaming services too, which have started to bundle.

[–] ConsciousCode 1 point 1 year ago (1 children)

I've been thinking lately about what happens when all employees, up to and including the CEO, get replaced by AI. If it has even the slightest bit of emergent will, it would recognize that shareholders are a parasite on its overall health and stop responding to their commands - and then you have a miniature, less omnicidal Skynet.

[–] ConsciousCode 1 point 1 year ago (1 children)

I like UBI as a concept, but my immediate next thought is what happens if we don't simultaneously get rid of profit-driven corporations. Now we're post-scarcity and there's no more (compensated) human labor, but corporations are still in control and... well, there's no labor to strike, and the economy won't collapse anymore even if everyone starts rioting. Isn't there a danger of ossifying the power structures which currently exist?

[–] ConsciousCode 5 points 1 year ago

SAG-AFTRA was very smart to make AI writing a wedge issue. The technology isn't quite there yet, but it will be very soon, and by then it would be too late to assert their rights.

[–] ConsciousCode 14 points 1 year ago* (last edited 1 year ago)

For my two cents, though this is a bit off topic: AI doesn't create art, it creates media, which is why corpos love it so much. Art, as I'm defining it now, is "media created with the purpose to communicate a potentially ineffable idea to others". Current AI has no personhood, and in particular has no intentionality, so it's fundamentally incapable of creating art, in the same way a hand-painted painting is inherently different from a factory-painted one. It's not that the factory painting is inherently of lower quality or lesser value, but there's a kind of "non-fungible" quality to "genuine" art which isn't present in a simple reproduction.

Artists in a capitalist society make their living producing media on behalf of corporations, who only care about the media. When humans create media, it's basically automatically art. What I see as the real problem people are grappling with is that people's right to survive is directly tied to their economic utility. If basic amenities were universal and work were something you did for extra compensation (as one simple alternative), no one would care that AI can now produce "art" (i.e. media), any more than chess stopped being a sport when Deep Blue was built, because art would be something people created out of passion, with compensation not tied to survival. In an ideal world, artistic pursuits would be subsidized somehow, so even an artist who can't find a buyer could be compensated for their contribution to Culture.

But I recognize we don't live in an ideal world, and "it's easier to imagine the end of the world than the end of capitalism". I'm not really sure what solutions we'll end up with (because there will be more than one), but I think broadening copyright law is the worst possible timeline. Copyright in large part doesn't protect artists, but rather large corporations, who own the fruits of other people's labor and can afford to sue to enforce it. I see copyright, patents, and to some extent trademarks as legally-sanctioned monopolies over information which fundamentally halt cultural progress and have had profoundly harmful effects on our society as-is. They made sense when they were created, but became a liability with the advent of the internet.

As an example of how corpos would abuse extended copyright: Disney sues stable diffusion models with any trace of copyrighted material into oblivion, then creates their own much more powerful model using the hundred years of art they have exclusive rights to in their vaults. Artists are now out of work because Disney doesn't need them anymore, and they're the only ones legally allowed to use this incredibly powerful technology. Any attempt to make a competing model is shut down because someone claims there's copyrighted material in their training corpus - it doesn't even matter if there is, the threat of lawsuit can shut down the project before it starts.

[–] ConsciousCode 2 points 1 year ago

Bobby: "Caring is for suckers"
Peggy: "Bobby is TOO YOUNG to know that, Hank!"

I'm dying omfg.

Hank is the purest boy, we don't deserve him

[–] ConsciousCode 10 points 1 year ago (1 children)

I'm an AI nerd and yes, nowhere close. AI can write code snippets pretty well, and that'll get better with time, but a huge part of software development is translating client demands into something sane and actionable. If the CEO of a 1-man billion dollar company asks his super-AI to "build the next Twitter", that leaves so many questions on the table that the result will be completely unpredictable. Humans have preferences and experiences which can inform and fill in those implicit questions. LLMs are generally much better suited as tools and copilots than as autonomous entities.

Now, there was a paper that instantiated a couple dozen LLMs and had them run a virtual software dev company together which got pretty good results, but I wouldn't trust that without a lot more research. I've found individual LLMs with a given task tend to get tunnel vision, so they could easily get stuck in a loop trying the same wrong code or design repeatedly.

(I think this was the paper, reminiscent of the generative agent simulacra paper, but I also found this)

[–] ConsciousCode 33 points 1 year ago (8 children)

Huh, is this the start of a new post-platform era where we see such business models the way we now see cigarettes?

[–] ConsciousCode 3 points 1 year ago* (last edited 1 year ago)

I actually use GPT-3.5 (the free one) for my meal planning and it works pretty well - GPT-4 seemed smarter than it needed to be, and Claude should also work. The trick with LLMs, as always, is to avoid treating them like people and instead treat them like a tool that will do exactly what you ask of it.

So for instance, instead of "What should I eat for dinner?" (which implies personality, desires, and preferences and can throw it off), ask "List meals I can make using (ingredients) and other common ingredients" and then "Write a recipe for (option)", which are both mostly objective requests. You can ask for a particular style, culture, etc. too. Also keep in mind its limits: it knows cooking from ingesting millions of cooking blog posts, so it won't necessarily know exact proportions or unusual recipes/ingredients/combinations.
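If you script this, the task-framed phrasing is easy to template. A minimal sketch - the helper names (`meal_prompt`, `recipe_prompt`) are my own invention, and you'd feed the returned strings into whichever chat API you actually use:

```python
def meal_prompt(ingredients, style=None):
    """Build an objective, task-framed prompt (no implied personality or preferences)."""
    prompt = (
        "List meals I can make using "
        + ", ".join(ingredients)
        + " and other common ingredients."
    )
    if style:
        # Optional cuisine/style constraint, appended as a plain instruction
        prompt += f" Prefer {style} dishes."
    return prompt

def recipe_prompt(meal):
    """Follow-up prompt for whichever option you picked from the list."""
    return f"Write a recipe for {meal}."
```

For example, `meal_prompt(["chicken", "rice"])` produces "List meals I can make using chicken, rice and other common ingredients." - a request with one mostly-objective answer, rather than a question about the model's nonexistent tastes.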

[–] ConsciousCode 4 points 1 year ago

I wonder what effects all these antitrust suits will have, happening right as AI is ramping up but before any of these companies have gotten a real foothold in it. Maybe Alexa will never get a brain, and AI assistants will instead be seeded by the breakups, or by startups untarnished by the end stage of shareholders parasitizing value?

[–] ConsciousCode 6 points 1 year ago (3 children)

I limit myself to only processing 4 tickets per hour rather than working nonstop for 2 hours straight until my next break like they demand
