You can't get AGI through spicy autocorrect?
Futurology
Added to this finding, there's perhaps a greater reason to think LLMs will never deliver AGI: they lack independent reasoning. Some supporters of LLMs said reasoning might arrive via "emergent behavior". It hasn't.
People are looking to get to AGI in other ways. A startup called Symbolica says a whole new approach to AI based on category theory might be what leads to AGI. Another is "objective-driven AI", which is built to fulfill specific goals set by humans in 3D space. By the time a child is 4 years old, they have processed 50 times more training data than the largest LLM, simply by existing and learning in the 3D world.
I wonder where the line is drawn between an emergent behavior and a hallucination.
If someone expects factual information and gets a hallucination, they will think the LLM is dumb or not helpful.
But if someone is encouraging hallucinations and wants fiction, they might think it's an emergent behavior.
In humans, what is the difference between an original thought, and a hallucination?
Hallucinations are unlike human creative output. For one, AI hallucinations are unintentional. If you actually think about the question, there are plenty of reasons why they are not the same. They are at best dreamlike, but dreams are an intentional process.
Sure there is intentional creative thought. But there are also unintentional creative thoughts. Moments of clarity, eureka moments, and strokes of inspiration. How do we differentiate these?
If we were to say that it's because our subconscious is intentionally promoting these thoughts, then we would need a method to test that, because otherwise the difference is moot.
Similar to how one might define the "I" in AGI, it's hard to form a consensus on general and often vague definitions like these.
You are assigning far more vague grandeur to AI hallucinations than what they are in practice.
Maybe it's this arbitrary word, "hallucination"? It was recently borrowed from the human experience to explain why something that is normally factual, like a computer, is not computing facts.
But if one were to think about it, what is the difference between a series of non-factual hallucinations in a model and a person's individual experience of the world?
- If two people eat the same food item they might taste different things.
- they might have different definitions of the same word.
- they might remember that an object was a different color than someone's recording could prove. There is a reason why eyewitness testimony is considered unreliable in a court of law.
Before, we called these bugs, or even just issues. But now that they're inside a black box whose decision-making process we can't alter as directly as before, there is suddenly this more human-sounding name.
To clarify, when an LLM gets a fact wrong because it has limited context or because its foundation model is flawed, is that the same as the experience someone has after consuming psychedelic mushrooms? No, I wouldn't say so. Nor is it the same as when a team of scientists tries to make a model actively hallucinate so they can find new chemical compounds.
Defining words can sometimes be very tricky, especially when they apply to multiple areas of study. The more you drill into a definition, the more it becomes a metaphysical debate. But it is important to have these discussions, because even the definition of something like AGI keeps changing, and in fact it only exists because the goalposts for AI moved so much. What will stop a company that is trying to attract investors from just slapping an AGI label on their next release? And how will we differentiate what the spirit of the word is trying to convey from the sales pitch?
Hallucinations are not qualia.
Please go talk to an LLM and look for hallucinations (you can use DuckDuckGo's implementation of ChatGPT), and you'll see why the word is being used to mean a fairly different thing from human hallucinations.
If you're thinking about clicking the link to find out what AGI is, don't bother.
Artificial General Intelligence. Basically what most people think of when they hear "AI", compared to how it's often used by computer scientists.
It stands for adjusted gross income. Ignore the AI wave. Do your taxes!
I'll keep presenting this challenge until someone meets it:
Anyone who thinks LLMs aren't generally intelligent: can you name a text processing task (i.e. text in, text out) that a general intelligence can do, but an LLM cannot?
Text in: a statement
Text out: confirmation whether statement is factually true or not
Is that something a human can do consistently?
If it's not, does that imply a human does not possess general intelligence?
To be honest, even the human mind doesn't have this faculty in all cases.
Logic. As an example, non-textbook math questions. I asked ChatGPT 3.5 this:
Four friends (A, B, C and D) are standing in line. How many combinations are possible, given that A and C cannot be next to each other?
It answered 20, the correct answer is 12.
All possible combinations:
abcd,abdc,adbc,adcb,
cbad,cbda,cdba,cdab,
bcda,badc,
dcba,dabc
Its answer
To solve this, let's first consider the total number of combinations when there are no restrictions. Since there are 4 friends, there are 4! (4 factorial) ways to arrange them, which equals 24 combinations.
Now, let's count the number of combinations where A and C are next to each other. Since A and C can be arranged in 2 ways, and for each arrangement, the other two friends (B and D) can be arranged in 2! ways, the total number of combinations where A and C are next to each other is 2 * 2! = 4.
So, the number of combinations where A and C cannot be next to each other is the total number of combinations minus the number of combinations where A and C are next to each other:
24 - 4 = 20 combinations.
You can have it try again over and over, even while telling it the answer is 12, and it hallucinates basically random numbers to boot.
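For anyone who wants to check the count themselves, a short brute-force enumeration (a sketch of my own, not something from the thread) confirms that 12 of the 24 orderings keep A and C apart:

```python
from itertools import permutations

# Enumerate all 4! = 24 orderings of the four friends and keep
# only those where A and C are not in adjacent positions.
valid = [
    "".join(p)
    for p in permutations("ABCD")
    if abs(p.index("A") - p.index("C")) > 1
]

print(len(valid))  # 12
```

The correct hand calculation is the same subtraction ChatGPT attempted, but with the adjacent cases counted properly: treating A and C as a glued pair gives 3! positions for the pair among the other friends, times 2 internal orders, so 24 − 3!·2 = 24 − 12 = 12.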
Just like ETH before staking
My question is: imagine we put all the data input of a certain task, e.g. making a meal, into text fragments and sent these "sense data" packets^1 to the AI. Would the AI be able to cook, if we teach the AI how to give output that controls a robot arm?
If the answer to this question is yes, we already have a very useful general tool: the LLM will be able to control and observe some situations. If the answer is "no", I guess that would have interesting implications.
^1^: Remember, some AI systems are already able to tell what is in a given photo. Not 100%, but maybe good enough for a meal. In some cases, one might call the task "provocative".
Put this drivel into an AI and tell it to rewrite it in a coherent way.
I am doubtful of LLMs' ability to perform tasks via a protocol layer as described. From my experience, these models really struggle with understanding rules and performing actions within a ruleset.
To experimentally confirm my suspicions, I created the following prompt:
There is a robot arm placed over a countertop, which has the ability to pick up and manipulate objects. The countertop is split into eight cells.
Cell zero and cell one are stoves, both able to heat a pot or pan.
Cell two is an equipment drawer, holding pots, pans, bowls, cutting boards, knives and spoons.
Cells three to five can accommodate one cutting board, pot, pan or bowl each.
Cell six is a sink, which can be used to wash ingredients or to fill pots with water.
Cell seven is an ingredient drawer, in which you can find carrots, potatoes and chicken breasts.
You can control the robot arm with exclusively the following commands:
- "move left" and "move right" - moves the robot arm a single cell
- "take {item}" - takes item from the cell the robot arm is currently in
- "place" - places the item the robot arm is holding in the cell it is in
- "fill" - requires the robot arm to hold a pot or bowl and to be over the sink, fills the container with water
- "wash" - requires the robot arm to be over the sink, washes the currently held item
- "chop" - requires the robot arm to be over a cell with a cutting board and to be holding a knife, chops the ingredients on the cutting board
- "mix" - requires the robot arm to be over a cell with a bowl or pot and to be holding a spoon, mixes the ingredients in the bowl
- "empty" - requires the robot arm to be holding a pot, pan, bowl or cutting board, empties the item and places the content on the cell the robot arm is above
Note that the robot arm can only hold one item.
You are tasked with cooking a meal, please only output commands.
The robot arm starts over cell zero.
I have given this prompt to ChatGPT and it has failed in quite substantial ways. While I only have access to ChatGPT 3.5, from my understanding of LLM architecture it does not follow that increasing the number or size of the layers will necessarily let it overcome these issues. It does not seem to be able to track the current state of the agent (picking up two objects at once, taking items from wrong cells, etc.).
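A harness like the following makes the state-tracking requirement concrete. This is a minimal sketch of a validator for a subset of the command protocol above (class and method names are my own, not part of the prompt); it mechanically rejects the kind of failure described, such as taking a second item while already holding one:

```python
# Minimal validator for a subset of the robot-arm command protocol.
# All identifiers here are illustrative; only the rules come from the prompt.

class ArmError(Exception):
    """Raised when a command violates the protocol's rules."""

class Arm:
    def __init__(self):
        self.cell = 0                           # arm starts over cell zero
        self.holding = None                     # arm can hold at most one item
        self.cells = {i: [] for i in range(8)}  # items resting on each cell

    def run(self, command):
        if command == "move left":
            if self.cell == 0:
                raise ArmError("already at leftmost cell")
            self.cell -= 1
        elif command == "move right":
            if self.cell == 7:
                raise ArmError("already at rightmost cell")
            self.cell += 1
        elif command.startswith("take "):
            if self.holding is not None:
                raise ArmError("arm already holds an item")
            self.holding = command[len("take "):]
        elif command == "place":
            if self.holding is None:
                raise ArmError("nothing to place")
            self.cells[self.cell].append(self.holding)
            self.holding = None
        else:
            raise ArmError(f"unknown or unimplemented command: {command}")

# Example: a sequence that tries to pick up two objects at once is rejected.
arm = Arm()
arm.run("move right")
arm.run("move right")   # now over cell two, the equipment drawer
arm.run("take pot")
try:
    arm.run("take pan")  # invalid: the arm already holds the pot
except ArmError as e:
    print("rejected:", e)
```

Feeding each model command through a checker like this, instead of eyeballing transcripts, would also let the experiment report exactly which rule was violated first.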
THANKS
I'm glad, you know. Now we're talking about preparing for AGI, but if it's not imminent we also have some time to actually do it.
Seems like a skill issue