So. I was working with this code:

public function front(?string $sortBy,
        ?string $time,
        ?string $type,
        ?string $filter,
        string $federation,
        Request $request): Response
    {
        $user = $this->getUser();

        if (!$user)
        {
            $filter = 'all';
        }

        $criteria = new EntryPageView($this->getPageNb($request));
        $criteria->showSortOption($criteria->resolveSort($sortBy))    

... and so on; the function continues. VSCode told me there was a syntax error on the closing brace of the if statement. I stared at it for a while, then pasted the code into GPT (not Copilot). Here's the conversation we had about it:


Me

What's the syntax error here?

(and I pasted the code)

GPT

The syntax error in your PHP function public function front seems to be due to the incorrect placement of the closing curly brace for the if statement. It appears that the if statement is not properly closed before the $criteria initialization begins. The corrected code should look like this:

public function front(?string $sortBy, ?string $time, ?string $type, ?string $filter, string $federation, Request $request): Response
{
    $user = $this->getUser();

    if (!$user) {
        $filter = 'all';
    } // Closing brace for the if statement

    $criteria = new EntryPageView($this->getPageNb($request));
    $criteria->showSortOption($criteria->resolveSort($sortBy));
    // ... rest of your function code ...
}

Ensure that each statement and block is properly closed and followed correctly by the next statement. This corrected structure should resolve the syntax error you are encountering.

Me

I don't get it. I put a closing curly brace right after the statement in mine... what am I missing?

GPT

I apologize for the confusion. Upon re-examining your code, it seems the placement of the curly brace is correct. If you're still encountering a syntax error, it may not be related to the if statement structure.

Let's look at other possible issues:


... and so on. Pasting in GPT's recommended if-statement block, including the curly brace that was already there, fixed the error. I have since re-pasted the same thing I initially gave GPT back into VSCode, and it all works now. Asking GPT the exact same question again now leads to random guesses that have nothing to do with the cursed curly brace that was previously causing problems for both VSCode and GPT.

I have given up and moved on at this point, content to say vaguely that maybe the unusual brace style, or pasting the incomplete function without its final semicolon or close brace, coincidentally confused GPT on exactly the same line where VSCode had gotten confused for unexplained reasons. But the whole thing still has this Twilight-Zone oddness to me.
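For what it's worth, one mundane mechanism that can produce this kind of spooky behavior: PHP only notices a missing semicolon when the parser reaches the next statement, so the reported error line is often one or more lines below the actual mistake. A minimal illustration of that failure mode (not the original code, just a sketch of the general pattern):

<?php
// The semicolon is missing after this assignment...
$filter = 'all'
// ...but PHP only notices when it reaches the next statement,
// so the parse error (something like "syntax error, unexpected
// variable '$criteria'") is reported on this line instead.
$criteria = 1;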

[–] TehPers 3 points 10 months ago (2 children)

GPT, at least from my limited understanding, is a tool designed to continue the input. You feed it a sequence of tokens, and it returns the tokens it "believes" come next. While your impression is valid, it's still a "completion engine". ChatGPT and other products use GPT, but they've built a product around it. They are not simply frontends for GPT - they do a lot more processing than that.

Also, I'm not trying to understate your impression. It's pretty impressive how good it is, despite the compute it needs. I would caution against overestimating its responses, though. It does not "reason", "think", etc. Its purpose is to continue a sequence of tokens following the (super complex) patterns it was trained on, to put it super basically. When it appears to reason something out, it's because the people whose text it was trained on did that reasoning.
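To make "continue the input" concrete, here's a toy sketch of the shape of the task (a made-up bigram lookup, nothing like the real model; the data and the nextToken helper are invented for illustration):

<?php
// Toy "completion engine": given the last token of a sequence,
// look up a likely next token. GPT's model is vastly more
// sophisticated, but the interface is the same: tokens in,
// predicted next token out.
$bigrams = [
    'public'   => 'function',
    'function' => 'front',
    'return'   => '$this',
];

function nextToken(array $bigrams, array $sequence): string
{
    $last = end($sequence);
    return $bigrams[$last] ?? '<unknown>';
}

echo nextToken($bigrams, ['public']); // prints "function"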

[–] jarfil 4 points 10 months ago* (last edited 10 months ago) (1 children)

Yes and no.

GPT started as a model for a completion engine... then it got enhanced with a self-reflection circuit, got trained on a vast amount of data, and gained a "temperature" parameter so it can make tiny "mistakes" compared to a simple continuation model, which allows it to do (limited, and random) free association of concepts.
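Concretely, "temperature" just rescales the model's output scores before sampling, so lower-probability tokens occasionally win. A rough sketch with illustrative numbers (not real model output):

<?php
// Temperature rescales scores before sampling. Low temperature:
// the top token nearly always wins (safe, repetitive). High
// temperature: flatter distribution, so unlikely tokens sometimes
// get picked - the "tiny mistakes" that enable free association.
function sampleWithTemperature(array $logits, float $temperature): string
{
    $weights = array_map(fn ($l) => exp($l / $temperature), $logits);
    $r = mt_rand() / mt_getrandmax() * array_sum($weights);
    foreach ($weights as $token => $weight) {
        $r -= $weight;
        if ($r <= 0) {
            return $token;
        }
    }
    return array_key_last($weights);
}

// Illustrative scores for three candidate next tokens.
$logits = ['the' => 5.0, 'a' => 3.0, 'banana' => 1.0];
echo sampleWithTemperature($logits, 0.2); // almost always "the"
echo sampleWithTemperature($logits, 2.0); // sometimes "banana"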

It doesn't "think", but it does what can be seen as a single iteration of "reasoning" at a time. If you run it multiple times without resetting the context, you can make it actually "reason", step by step. Thanks to that degree of free association of concepts, this reasoning is not always the same as what it found in the training set, but something that can be seen as "real" reasoning: associating concepts towards a goal. These are the "zero-shot" responses that make LLMs so interesting:

https://en.m.wikipedia.org/wiki/Zero-shot_learning
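That "step by step" iteration can be pictured as a loop that feeds each answer back into the context. A hypothetical sketch ($completeOnce is an invented stand-in for a single model call, not a real API):

<?php
// Each call sees everything produced so far, so later steps can
// build on earlier ones. $completeOnce stands in for one call to
// a completion model; it is not a real API.
function reasonStepByStep(callable $completeOnce, string $question, int $steps): string
{
    $context = $question;
    for ($i = 0; $i < $steps; $i++) {
        // One "iteration of reasoning": continue the context once.
        $context .= "\n" . $completeOnce($context);
    }
    return $context;
}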

TL;DR: I agree that it shouldn't be overestimated, but I don't think it should be underestimated either; it does "reason", a bit.

[–] Mad_Punda@feddit.de 1 points 10 months ago (1 children)

This was really interesting to read. Do you have some links where I can read more about what ChatGPT likely is and isn’t capable of?

[–] jarfil 1 points 10 months ago

Check this out:

ChatGPT: A 30 Year History | How Neural Networks Learned to Talk

https://youtube.com/watch?v=OFS90-FX6pg

It references several papers and explanations over the years. You can also check the papers themselves, or other explanations about particular elements.

[–] mozz@mbin.grits.dev 1 points 10 months ago

GPT runs on computer hardware, which is just a tool to decide: "yes" and "no" equals "no" because they're not both true; remember "one"; okay, tell me back what the value was; okay, "one". It's all just bits and mind-bogglingly simple transformations on bits. The simplicity of the computations that underlie it doesn't limit the complexity of what it can do at scale.

I fully agree with you that GPT can't actually reason, no matter how convincing the illusion is. But purely token-shuffling tasks like translating between human languages, or analyzing code for purely syntactic errors, are as much in its wheelhouse as arithmetic is to CPUs. Or they should be, anyway. Sometimes weird stuff happens.