sisyphean

joined 1 year ago

Intelligence explosion arguments don’t require Platonism. They just require intelligence to exist in the normal fuzzy way that all concepts exist.

4
submitted 1 year ago* (last edited 1 year ago) by sisyphean@programming.dev to c/auai@programming.dev
 

At OpenAI, protecting user data is fundamental to our mission. We do not train our models on inputs and outputs through our API.

 

We’re rolling out custom instructions to give you more control over how ChatGPT responds. Set your preferences, and ChatGPT will keep them in mind for all future conversations.

@AutoTLDR

 

GPT-3.5 and GPT-4 are the two most widely used large language model (LLM) services. However, when and how these models are updated over time is opaque. Here, we evaluate the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 on four diverse tasks: 1) solving math problems, 2) answering sensitive/dangerous questions, 3) generating code, and 4) visual reasoning. We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time. For example, GPT-4 (March 2023) was very good at identifying prime numbers (accuracy 97.6%), but GPT-4 (June 2023) was very poor on these same questions (accuracy 2.4%). Interestingly, GPT-3.5 (June 2023) was much better than GPT-3.5 (March 2023) on this task. GPT-4 was less willing to answer sensitive questions in June than in March, and both GPT-4 and GPT-3.5 had more formatting mistakes in code generation in June than in March. Overall, our findings show that the behavior of the “same” LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLM quality.
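
For concreteness, a minimal sketch of this kind of snapshot comparison, assuming the pre-1.0 openai Python SDK and access to the dated gpt-4-0314 and gpt-4-0613 snapshots (the prime-number question is an illustrative stand-in for the paper's test set):

import openai

SNAPSHOTS = ["gpt-4-0314", "gpt-4-0613"]  # March vs. June 2023 versions
QUESTION = "Is 17077 a prime number? Answer with a single word: yes or no."

for model in SNAPSHOTS:
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
        temperature=0,  # reduce sampling noise so snapshots are comparable
    )
    print(model, "->", response["choices"][0]["message"]["content"])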

 

Introducing Llama 2 - The next generation of our open source large language model. Llama 2 is available for free for research and commercial use.

This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters.
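
A minimal sketch of loading the released weights with Hugging Face transformers, assuming access to the gated meta-llama/Llama-2-7b-chat-hf repository has been granted and the accelerate package is installed (needed for device_map="auto"):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smallest of the 7B-70B releases

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Open model weights are useful because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))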

@AutoTLDR

 

16 Mar, 2023

Kagi Search is pleased to announce the introduction of three AI features into our product offering.

We’d like to discuss how we see AI’s role in search, what the challenges are, and our AI integration philosophy. Finally, we will go over the features we are launching today.

@AutoTLDR

 

This is a game that tests your ability to predict ("forecast") how well GPT-4 will perform at various types of questions. (In case you've been living under a rock these last few months, GPT-4 is a state-of-the-art "AI" language model that can solve all kinds of tasks.)

Many people speak very confidently about what capabilities large language models do and do not have (and sometimes even could or could never have). I get the impression that most people who make such claims don't even know what current models can do. So: put yourself to the test.

[–] sisyphean@programming.dev 1 points 1 year ago

I would be happy to, but all current local models are vastly inferior to GPT-3.5. The unfortunate reality is that if you want to create anything high quality, you must use the OpenAI API.

 

Increasingly powerful AI systems are being released at an increasingly rapid pace. This week saw the debut of Claude 2, likely the second most capable AI system available to the public. The week before, OpenAI released Code Interpreter, the most sophisticated mode of AI yet available. The week before that, some AIs got the ability to see images.

And yet not a single AI lab seems to have provided any user documentation. Instead, the only user guides out there appear to be Twitter influencer threads. Documentation-by-rumor is a weird choice for organizations claiming to be concerned about proper use of their technologies, but here we are.

@AutoTLDR

 

TL;DR: (by GPT-4 🤖)

The article by Chandler Kilpatrick on Medium discusses the new Code Interpreter feature of ChatGPT, which has been released to Beta from its previous Alpha testing phase. The Code Interpreter enhances ChatGPT's ability to process, generate, manipulate, and run code, currently supporting only Python. Users can upload files (with a limit of 100 MB per file) for the AI to interact with, although it cannot edit files directly. The Code Interpreter can be used in various fields such as software development, data analytics, documentation, and education, helping with tasks like code generation, error detection, code refactoring, creating data visualizations, and providing real-time programming tutoring. The article also highlights some impressive feats accomplished by users, including recreating the game Flappy Bird in less than 10 minutes.
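
To make this concrete, here is the kind of snippet Code Interpreter typically writes and runs against an uploaded file; the sales.csv file and its month/revenue columns are hypothetical, and /mnt/data is the sandbox path uploads are commonly reported to land in:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("/mnt/data/sales.csv")  # uploaded files appear in the sandbox
print(df.describe())                     # quick summary statistics

# A typical auto-generated visualization: aggregate, plot, save for download.
df.groupby("month")["revenue"].sum().plot(kind="bar", title="Revenue by month")
plt.tight_layout()
plt.savefig("/mnt/data/revenue_by_month.png")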

 

LLM is my command-line utility and Python library for working with large language models such as GPT-4. I just released version 0.5 with a huge new feature: you can now install plugins that add support for additional models to the tool, including models that can run on your own hardware.
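
A minimal sketch of the library's Python side, assuming the get_model/prompt API; which model names work depends on the plugins installed and the API keys configured:

import llm

model = llm.get_model("gpt-3.5-turbo")  # or a plugin-provided local model
response = model.prompt("Five creative names for a pet pelican")
print(response.text())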

@AutoTLDR

 

An AI-first notebook, grounded in your own documents, designed to help you gain insights faster.

@AutoTLDR

[–] sisyphean@programming.dev 1 points 1 year ago

It isn’t available outside the US and the UK, so I can’t try it yet, but I will as soon as I get access.

[–] sisyphean@programming.dev 3 points 1 year ago* (last edited 1 year ago)

Ethan Mollick has two recent articles related to this topic:

[–] sisyphean@programming.dev 2 points 1 year ago

AFAIK it doesn’t. But I haven’t got access to it yet, so this is based on tweets by some people who have.

[–] sisyphean@programming.dev 3 points 1 year ago (1 children)

Everyone does cringey things sometimes, but it takes a great person to admit their mistakes.

[–] sisyphean@programming.dev 1 points 1 year ago* (last edited 1 year ago)

I've just tried the version of GPT-4 available via the API. There has been a lot of talk about GPT-4 being "nerfed" since the May update, and I wanted to see whether I could spot an obvious difference. I ran the same prompt in both the web-based ChatGPT with GPT-4 selected and a self-hosted chat UI that uses the API. The API call used a temperature value of 0.7.
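
For reference, the API side of the comparison can be reproduced with a call along these lines (a sketch assuming the pre-1.0 openai Python SDK; the prompt string is abbreviated here and quoted in full below):

import openai

prompt = "I use Nginx as a reverse proxy on my Ubuntu server..."  # full prompt below

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,  # the value used in this experiment
)
print(response["choices"][0]["message"]["content"])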

Here is the prompt I used:

I use Nginx as a reverse proxy on my Ubuntu server. There is a web app running on localhost:3456, and I want to expose it to the internet using the hostname chat.botond.online. I also want to protect it using HTTP Basic authentication. I have the Apache tools for managing htpasswd files installed.

Could you please write an Nginx config file for this site, and also explain how I can create a htpasswd file and use it in this nginx config to protect it?

Here is ChatGPT's response:

Sure, I can certainly help with that. We will perform these tasks in two steps:

  1. Create an htpasswd file
  2. Write an Nginx config file

Let's start with the first one.

Step 1: Create an htpasswd file

You can create a .htpasswd file by using the htpasswd command that comes with the Apache httpd server. In your case, you can create a user (let's say user1) like this:

sudo htpasswd -c /etc/nginx/.htpasswd user1

This will prompt you to enter and confirm a password for the user user1. The -c option tells htpasswd to create a new file.

Please note: This will overwrite the file if it already exists. If you want to add additional users to an existing .htpasswd file, do not use the -c flag.

Step 2: Write an Nginx config file

Now, let's create an Nginx configuration. You can add a new server block in your Nginx config, or create a new config file under /etc/nginx/sites-available/ and symlink it to /etc/nginx/sites-enabled/. I'll create a new server block as an example:

server {
    listen 80;
    server_name something.somewhere.com;

    location / {
        auth_basic "Restricted Content";  # This is the message that will be shown in the login prompt
        auth_basic_user_file /etc/nginx/.htpasswd;  # The file you created earlier

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_pass http://localhost:3456;
    }
}

This will set up basic HTTP authentication on your site and also reverse proxy requests to your web app running on localhost:3456.

Note: If you're planning to expose this to the internet, I would strongly recommend setting up SSL as well. You can use a free service like Let's Encrypt to get an SSL certificate. If you don't secure your site with HTTPS, the HTTP Basic authentication credentials will be sent in plain text, which is a significant security risk.

Here is the response to the same prompt using the API:

Sure, let's start by creating the htpasswd file for HTTP basic authentication. You can create a .htpasswd file using the htpasswd command that comes with the apache2-utils package. If it's not installed, you can install it via apt-get install apache2-utils.

To create a .htpasswd file, run the following command:

htpasswd -c /etc/nginx/.htpasswd username

Replace username with the username you want. After running this command, you will be prompted to enter and confirm a password for this user. The -c flag is used to create a new file. Do not use this flag if you already have a .htpasswd file and just want to add a new user.

Now, let's create an Nginx configuration file. Using your favorite text editor (nano, vim, etc.), open up a new configuration file in /etc/nginx/sites-available/, for example:

sudo nano /etc/nginx/sites-available/something.somewhere.com

Then, paste the following configuration:

server {
    listen 80;
    server_name something.somewhere.com;

    location / {
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_pass http://localhost:3456;
    }
}

In this configuration:

  • listen 80; means Nginx is listening on port 80 for incoming connections.
  • server_name something.somewhere.com; specifies the hostname.
  • auth_basic "Restricted Content"; enables HTTP basic authentication with the message "Restricted Content".
  • auth_basic_user_file /etc/nginx/.htpasswd; specifies the location of the .htpasswd file.
  • The proxy_set_header directives are used to pass on some client information to the proxied server.
  • proxy_pass http://localhost:3456; is used to forward requests to your web app running at localhost:3456.

Save and close the file.

To enable this site, create a symbolic link of it in the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/something.somewhere.com /etc/nginx/sites-enabled/

Finally, test the configuration and restart Nginx:

sudo nginx -t
sudo systemctl restart nginx

Now your web app should be accessible from the internet at http://something.somewhere.com, and it should ask for a username and password as you specified in the .htpasswd file.

There is a marked difference between the two responses, but as far as I can see, both are correct and high-quality. ChatGPT is less verbose, while the API generates a lot of detail. I slightly prefer the API's answer.

I will probably try the same experiment with much more complex problems (if anyone has a suggestion for a good test case, please tell me in a comment), but on this simple problem, they both performed very well.

[–] sisyphean@programming.dev 6 points 1 year ago* (last edited 1 year ago) (1 children)

I’ve been waiting for Code Interpreter for a long time. I’m very excited to finally be able to try it. Some interesting examples here:

[–] sisyphean@programming.dev 1 points 1 year ago* (last edited 1 year ago)

TL;DR: (human-generated 👱‍♂️)

  • GPT-4 API access for all current paying customers
  • New instruction model: gpt-3.5-turbo-instruct
  • Deprecation of the Edits API
  • Deprecation of all older embedding models in favor of text-embedding-ada-002, but don’t worry:

We recognize this is a significant change for developers using those older models. Winding down these models is not a decision we are making lightly. We will cover the financial cost of users re-embedding content with these new models. We will be in touch with impacted users over the coming days.
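
Re-embedding is a single API call per batch of content. A minimal sketch, assuming the pre-1.0 openai Python SDK, with a hypothetical documents list standing in for your stored content:

import openai

documents = ["first stored document", "second stored document"]  # hypothetical

response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=documents,  # the endpoint accepts a batch of strings
)
embeddings = [item["embedding"] for item in response["data"]]
print(len(embeddings), "vectors,", len(embeddings[0]), "dimensions each")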

[–] sisyphean@programming.dev 2 points 1 year ago

Ok, this is an uncharacteristically bad summary, AutoTLDR. Bad bot!
