ConsciousCode

joined 1 year ago
[–] ConsciousCode 1 points 1 year ago (1 children)

That sounds like a pain - surely there's a shorter length that's still strong enough that it can't be cracked in a trillion years?

[–] ConsciousCode 4 points 1 year ago

I don't think crypto is dead, I think fintech's usage of crypto is dead. They came in and ruined what could've been a unique and revolutionary idea by making prospective currencies into speculative assets. We might see it reemerge in 10 years with capitalists and right-libertarians staying as far away as possible because they (hopefully) learned their lesson. The point of a currency is to serve as a medium to store and exchange value, but the initial spike in fiat value, turning 12 bitcoins from $0.12 to $12,000, attracted investors, get-rich-quick schemers, and scam artists (but I repeat myself). It doesn't help that it was designed to be deflationary, so people were incentivized to hoard and bet on the market's volatility, and there was no organization dedicated to keeping it stable like the Fed. Then alternatives to PoW like PoS came about, which further incentivized hoarding and centralization (you lose stake if you spend, so don't spend).

What people miss with all the hate about crypto (though the culture around it deserves a lot of it) is that the technology itself is potentially incredibly useful. Bitcoin was a first crack at the Byzantine Generals Problem - essentially, how to coordinate a totally trustless and decentralized p2p network. Tying it to money was an easy way to get an incentive structure, but for applications like FileCoin it could just as easily enable abstracted tit-for-tat services (in their case, "you host my file and I'll host yours"). Stuff like NFTs has less obvious benefit, but the technology itself is a neutral tool that could see some legitimate use 20 years in the future - say, a decentralized DNS system where you need a DHT mapping domains to IPNS hashes with some concept of ownership. Collectible monkeys are not and never were a legitimate use-case, at least not at that price point.
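As a toy illustration of that last idea (every name here is hypothetical, and a hash-commitment stands in for the real public-key signatures an actual system would need - revealing a secret on update would be replayable and insecure in practice):

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class NameRegistry:
    """Toy domain -> content-hash mapping with hash-commitment ownership.
    A stand-in for a DHT with real signature-based ownership."""
    def __init__(self):
        self.records = {}  # domain -> (content_hash, owner_commitment)

    def register(self, domain, content_hash, owner_secret: bytes) -> bool:
        if domain in self.records:
            return False  # first come, first served
        self.records[domain] = (content_hash, h(owner_secret))
        return True

    def update(self, domain, new_hash, owner_secret: bytes) -> bool:
        record = self.records.get(domain)
        if record is None or h(owner_secret) != record[1]:
            return False  # unknown domain or wrong owner
        self.records[domain] = (new_hash, record[1])
        return True

    def resolve(self, domain):
        record = self.records.get(domain)
        return record[0] if record else None
```

In a real decentralized version, the records dict would live in a DHT and ownership would be a signature check against the registrant's public key.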

[–] ConsciousCode 1 points 1 year ago

First I'd like to be a little pedantic and say LLMs are not chatbots. ChatGPT is a chatbot - LLMs are language models which can be used to build chatbots. They are models (like a physics model) of language, describing the causal joint probability distribution of language. ChatGPT only acts like an agent because OpenAI spent a lot of time retraining a foundation model (which has no such agent-like behavior) to model "language" as expressed by an individual. Then, they put it into a chatbot "cognitive architecture" which feeds it a truncated chat log. This is why the smaller models, when improperly constrained, may start typing as if they were you - they have no inherent distinction between the chatbot and yourself. LLMs are a lot more like Broca's area than a person or even a chatbot.
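A minimal sketch of what that truncated-chat-log wrapper does (the function name is made up, and character counts stand in for the real token budgets a production chatbot would use):

```python
def build_prompt(system: str, history: list[tuple[str, str]],
                 max_chars: int = 200) -> str:
    """Assemble the model's input from a chat log, keeping only the
    most recent turns that fit the budget - older turns fall off."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    kept, used = [], 0
    for line in reversed(lines):          # walk newest-first
        if used + len(line) > max_chars:
            break                         # budget exhausted: truncate here
        kept.append(line)
        used += len(line)
    return "\n".join([system] + list(reversed(kept)) + ["Assistant:"])
```

The model itself never "remembers" anything - the architecture around it just keeps re-feeding it whatever slice of the log fits.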

When I say they're "general purpose", this is more or less an emergent feature of language, which encodes some abstract sense of problem solving and tool use. Take the library I wrote to create "semantic functions" from natural language tasks - one of the examples I keep going to in order to demonstrate the usefulness is

@semantic
def list_people(text) -> list[str]:
    '''List the people mentioned in the given text.'''

A year ago, this would've been literally impossible. I could approximate it with thousands of lines of code using SpaCy and other NLP libraries to do NER, maybe a massive dictionary of known names with fuzzy matching, some heuristics to rule out city names, or more advanced sentence-structure parsing for false positives, but the result would be guaranteed to be worse for significantly more effort. With LLMs, I just tell the AI to do it and it... does. Just like that. I can ask it to do anything and it will, within reason and with proper constraints.
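For illustration, here's one way a decorator like that could be wired up - this is not the actual library's implementation; `make_semantic`, `complete`, and the canned "LLM" response are all hypothetical stand-ins:

```python
import json
from functools import wraps

def make_semantic(complete):
    """Build a @semantic decorator around a prompt->text callable.
    `complete` would wrap a real LLM; here it's pluggable so we can stub it."""
    def semantic(func):
        @wraps(func)
        def wrapper(*args):
            prompt = (f"Task: {func.__doc__}\n"
                      f"Input: {json.dumps(args)}\n"
                      "Reply with a JSON value only.")
            return json.loads(complete(prompt))
        return wrapper
    return semantic

# fake "LLM" for demonstration: returns a canned answer
semantic = make_semantic(lambda prompt: '["Alice", "Bob"]')

@semantic
def list_people(text) -> list[str]:
    '''List the people mentioned in the given text.'''

print(list_people("Alice met Bob."))  # → ['Alice', 'Bob']
```

The key trick is that the docstring *is* the implementation - the decorator just ships it off as instructions.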

GPT-3 was the first generation of this technology and it was already miraculous for someone like me who's been following the AI field for 10+ years. If you try GPT-4, it's at least 10x subjectively more intelligent than ChatGPT/GPT-3.5. It costs $20/mo, but it's also been irreplaceable for me for a wide variety of tasks - Linux troubleshooting, bash commands, ducking coding, random questions too complex to google, "what was that thing called again", sensitivity reader, interactively exploring options to achieve a task (eg note-taking, SMTP, self-hosting, SSI/clustered computing), teaching me the basics of a topic so I can do further research, etc. I essentially use it as an extra brain lobe that knows everything as long as I remind it about what it knows.

While LLMs are not people, or even "agents", they are "inference engines" which can serve as building blocks to construct an "artificial person" or some gradation therein. In the near future, I'm going to experiment with creating a cognitive architecture to start approaching it - long-term memory, associative memory, internal thoughts, dossier curation, tool use via endpoints, etc. - so that eventually I have what Alexa should've been, hosted locally. That possibility is probably what techbros are freaking out about; they're just uninformed about the technology and think GPT-4 is already that, or that GPT-5 will be (it won't). But please don't buy into the anti-hype - it robs you of the opportunity to explore the technology and could blindside you when it becomes more pervasive.
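As a toy sketch of just the associative-memory piece (hand-made 2D vectors stand in for a real embedding model, and the class name is made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class AssociativeMemory:
    """Store (embedding, text) pairs; recall the closest by similarity.
    A real system would embed text with a model and use an ANN index."""
    def __init__(self):
        self.items = []

    def store(self, vec, text):
        self.items.append((vec, text))

    def recall(self, vec):
        return max(self.items, key=lambda item: cosine(vec, item[0]))[1]
```

The LLM would then get the recalled snippets stuffed into its context - the "extra brain lobe" remembering for it.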

What would AI have to do to qualify as "capable of some interesting new kind of NLP or can create something entirely new"? From where I stand, that's exactly what generative AI is? And if it isn't, I'm not sure what even could qualify unless you used necromancy to put a ghost in a machine...

[–] ConsciousCode 23 points 1 year ago (1 children)

It sounds simple, but data conditioning like that is how you get the Scunthorpe problem, where innocuous text gets blacklisted - and the effects on the model, even if it's perfectly executed, are unpredictable. It could get into issues of "race blindness", where the model has no idea these words are bad and as a result is incapable of accommodating humans when the topic comes up. Suppose in 5 years there's a therapist AI (not ideal, but mental health is horribly understaffed and most people can't afford a PhD therapist) that gets a client who is upset because they were called a f**got at school - it would have none of the cultural context required to help.
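A toy version of why naive blacklisting fails (using a mild stand-in word rather than the actual obscenity hiding inside "Scunthorpe"):

```python
def naive_filter(text: str, blacklist: set[str]) -> bool:
    """Flag text if any blacklisted string appears as a substring -
    the approach that famously blocked residents of Scunthorpe."""
    t = text.lower()
    return any(bad in t for bad in blacklist)

blacklist = {"ass"}  # mild stand-in for a blacklisted obscenity

naive_filter("a classic bass solo", blacklist)  # → True: false positive
naive_filter("a polite greeting", blacklist)    # → False
```

Substring matching can't see word boundaries or context, which is exactly why filtering the *training data* this way both over-blocks and leaves the model culturally blind.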

Techniques like "constitutional AI" and RLHF, applied after foundation-model training, really are the best approach for this: they allow you to get an unbiased view of a very biased culture, then shape the model's attitudes toward it afterwards.

[–] ConsciousCode 5 points 1 year ago

I like to say "they're consistently biased". They might have racial or misogynistic biases from the culture they ingested, but they'll always express those biases in a consistent way. Meanwhile, humans can become more or less biased depending on whether they've eaten lunch yet or woke up tilted.

[–] ConsciousCode 9 points 1 year ago (4 children)

It makes me really sad because the techbros are a cargo cult with no understanding of the technology, and the anti-AI crowd is an overcorrection to the techbro hype train which overemphasizes the limitations without acknowledging that this is the first generation of general-purpose AI (distinct from AGI). Meanwhile I, someone who's followed the AI field for 10 years waiting for this day, am overjoyed by the near-miracle that is a general-purpose model that can handle any task you throw at it - and simultaneously worried that this yet-another-culture-war will leave people screeching about utopia vs. Skynet while capitalists use the technology to lay everyone off, sending us into a neotechnofeudal society where labor has no power instead of the socialist utopia, where work is optional, that we deserve...

[–] ConsciousCode 5 points 1 year ago (1 children)

The problem is focus. This is a bit like a building flooding and breaking out the mop while gallons are still pouring in - you'll need that mop eventually, but right now there are much more important things that need your attention.

[–] ConsciousCode 3 points 1 year ago

I'm not sure it should be illegal, since it can be legitimately useful, but maybe something like "inconclusive evidence that isn't enough to grant a warrant". That way, you can get a list of potential suspects but you don't end up violating rights by issuing undue warrants.

[–] ConsciousCode 8 points 1 year ago

Facial recognition should always be a clue, never evidence. It should have the same weight as eyewitness testimony, because the algorithms will always have biases from their dataset. Otherwise, we risk lawyers saying stuff like "the algorithm gives a 99% confidence this is you" and the jury thinking that's some objective measure. Meanwhile, the algorithm's dataset might be only 1% BIPOC, so it confidently labels many of them as the same person.
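Rough back-of-the-envelope numbers (made up purely for illustration) for why "99% confidence" misleads a jury:

```python
# Toy assumptions: a matcher that's wrong 1% of the time, searched
# against a database of 10,000 people containing one true match.
false_positive_rate = 0.01
database_size = 10_000

expected_false_matches = false_positive_rate * (database_size - 1)
# ~100 innocent people "match" for every real one - and the error
# rate is worse for groups underrepresented in the training data.
```

This is just the base-rate fallacy: a per-comparison accuracy of 99% says almost nothing once you search a large population.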

Reminds me of the movie Anon, with this jaw-dropping quote at the end: "It's not that I have something to hide. I have nothing I want to show you."

[–] ConsciousCode 8 points 1 year ago

Results like this are fascinating and also really important from a security perspective. When we find adversarial attacks like this, it immediately offers an objective to train against so the LLM is more robust (albeit probably slightly less intelligent).
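A toy, linear-model version of that attack-then-harden loop (nothing like real LLM adversarial training, but the same shape: find an input that flips the output, then take a training step on it):

```python
import math

def score(w, x):
    """Linear 'model': positive score -> positive class."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

w = [2.0, -1.0]
x = [1.0, 0.5]            # correctly classified positive: score = 1.5

# FGSM-style attack: push each input dimension against the weight's sign
eps = 1.0
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]
# score(w, x_adv) = -1.5, so the predicted label flips

# "train against it": one logistic-regression gradient step on the
# adversarial example with its true label y = 1
y, lr = 1.0, 1.0
p = sigmoid(score(w, x_adv))
w_robust = [wi + lr * (y - p) * xi for wi, xi in zip(w, x_adv)]
# now the adversarial example is classified correctly again
```

For LLMs the "gradient step" is a whole fine-tuning run on the discovered attack strings, but the objective appears the same instant the attack is found.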

I wonder if humans have magic strings like this which make us lose our minds? Not NLP, that's pseudoscience, but maybe like... eldritch screeching? :3c

[–] ConsciousCode 11 points 1 year ago (2 children)

That is single-handedly causing the downfall of Western Civilization (TM) /s

(imagine queers being that powerful lmao)

[–] ConsciousCode 9 points 1 year ago (1 children)

Didn't Florida just announce they were going to use PragerU videos in their curriculum? They might be getting to calling themselves a real university sooner rather than later...
