BrickedKeyboard

joined 1 year ago
[–] BrickedKeyboard@awful.systems 1 points 1 year ago* (last edited 1 year ago) (14 children)

now if that isn’t just the adderall talking

Nail on the head, especially for internet/'tech bro' culture. All my leads at work also have that "extreme OCD" kind of attitude. Sorry if you feel emotionally offended; I didn't mean it.

The rest of your post is, ironically, very much something Eliezer posits a superintelligence would be able to do. Or something out of the anime Death Note. I use a few words or phrases, you analyze the shit out of them, trying to extract all the information you can, and conclude all this stuff like:

opening gambit

“amongst friends”

hiding all sorts of opinions behind a borrowed language

guff about “discovering reality”

real demands as “getting with the right programme”,

allegedly, scoring points “off each other”

“Off each other” was another weasel phrase

you know that at least at first blush you weren’t scoring points off anyone

See, everything you wrote above is a possibly correct interpretation of what I wrote. It's like English-lit analysis after the author is dead. Eliezer posits a superintelligence could use this kind of analysis to convince operators with admin authority to break the rules, and L in Death Note uses it to almost catch the killer.

It's also all false in this case. (It's also why a superintelligence probably can't actually do this.) I've been on the internet long enough to know it is almost impossible to convince someone of anything, unless they were already willing and you just link some facts they didn't know about. So my gambit was actually something very different.

Do you know how you get people to answer a question on the internet? You post something that's wrong.* And it clearly worked: there's more discussion in this thread than in several pages of this entire forum, maybe more than since it was created.

*Ironically, in this case I posted what I think is the correct answer, but it disagrees with your ontology. If I wanted lesswrongers to comment on my post, I would need a different OP.

[–] BrickedKeyboard@awful.systems 1 points 1 year ago (1 children)

Which is fine. The bigger topic is: could you leave a religion if the priest's powers were real,* even if the organization itself was questionable?

*Real as in generally held to be real by all the major institutions in the world you live in. Most world governments and stock-market investors are investing in AI; they believe they will get an ROI somehow.

[–] BrickedKeyboard@awful.systems 1 points 1 year ago (2 children)

Do you think the problems you outlined are solvable even in theory, or must humans slog along at the current pace for thousands of years to solve medicine?

[–] BrickedKeyboard@awful.systems 1 points 1 year ago (16 children)

Next time it would be polite to answer the fucking question.

Sorry sir:

*I have to ask, on the matter of (2): why?* I think I answered this.

*What’s being signified when you point to “boomer forums”? That’s an “among friends” usage: you’re free to denigrate the boomer fora here. And then once again you don’t know yet if this is one of those “boomer forums”, or you wouldn’t have to ask.*

*What people in their droves are now desperate to ask, I will ask too: which is it, dummy? Take the stopper out of your speech hole and tell us how you really feel.*

I am not sure what you are asking here, sir. It's well known to those in the AI industry that a profound change is upon us and that GPT-4 shows generality for its domain; robotics generality is likely also possible using a variant technique. So individuals unaware of this tend to be retired people with no survival need to learn new skills, like my boomer relatives. I apologize for using an ageist slur.

[–] BrickedKeyboard@awful.systems 5 points 1 year ago (1 children)

Primary myoblasts double on average every 4 days! So given infinite nutrients, if you started with 1 gram of meat, it would take about 369 days to equal the mass of the Earth!
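Back-of-the-envelope check, as a sketch (the only inputs are the 4-day doubling time above and Earth's mass):

```python
import math

EARTH_MASS_GRAMS = 5.97e27    # Earth's mass, ~5.97e24 kg, in grams
DOUBLING_TIME_DAYS = 4        # average doubling time quoted above

# Starting from 1 gram, count the doublings needed to match Earth's mass.
doublings = math.log2(EARTH_MASS_GRAMS / 1.0)    # ~92.3 doublings
print(round(doublings * DOUBLING_TIME_DAYS))     # ~369 days
```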

[–] BrickedKeyboard@awful.systems 9 points 1 year ago (3 children)

Doesn't the futurism/hopium idea of building an ideal city go back to Disney, who more or less has feudal stronghold rights over Florida?

https://en.wikipedia.org/wiki/EPCOT_(concept)

Because of these two modes of transportation, residents of EPCOT would not need cars. If a resident owned a car, it would be used "only for weekend pleasure trips."[citation needed] The streets for cars would be kept separate from the main pedestrian areas. The main roads for both cars and supply trucks would travel underneath the city core, eliminating the risk of pedestrian accidents. This was also based on the concept that Walt Disney devised for Disneyland. He did not want his guests to see behind-the-scenes activity, such as supply trucks delivering goods to the city. Like the Magic Kingdom in Walt Disney World, all supplies are discreetly delivered via tunnels.

Or The Line in Saudi Arabia.

Definitely sneer-worthy, though it's sometimes worked. Napoleon III had Paris redesigned, which was probably a good thing. But they are stuck with that design to this day, which is probably bad.

[–] BrickedKeyboard@awful.systems 1 points 1 year ago* (last edited 1 year ago) (4 children)

The major take is: We spell it differently.

I am too dumb/autistic to know what you're conveying here.

[–] BrickedKeyboard@awful.systems 1 points 1 year ago (1 children)

The counterargument is GPT-4. For the domains this machine has been trained on, it has a large amount of generality; it captures a large amount of that real-world complexity and dirtiness. Reinforcement learning can make it better.

Or in essence: if you collect colossal amounts of information, yes, pirated from humans, and then choose what to do next by 'what would a human do', this does seem to solve the generality problem. You then fix your mistakes with RL updates when the machine fails on a real-world task.
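A minimal toy sketch of that two-stage loop; the states, actions, and demos here are all invented for illustration, not any real robotics stack:

```python
from collections import defaultdict

ACTIONS = ["grasp", "push", "wait"]

# Stage 1: behavior cloning. Score each (state, action) pair by how often
# human demonstrators chose it, and act like the majority of them.
human_demos = [("kitchen", "grasp"), ("kitchen", "grasp"),
               ("kitchen", "push"), ("workshop", "push")]
value = defaultdict(float)
for state, action in human_demos:
    value[(state, action)] += 1.0

def policy(state):
    return max(ACTIONS, key=lambda a: value[(state, a)])

# Stage 2: RL correction. When the cloned action fails on a real task,
# the reward signal overrides what imitation alone learned.
def rl_update(state, action, reward, lr=1.0):
    value[(state, action)] += lr * reward

print(policy("workshop"))                    # "push", copied from humans
rl_update("workshop", "push", reward=-1.0)   # the imitated action failed
rl_update("workshop", "grasp", reward=+1.0)  # an explored action worked
print(policy("workshop"))                    # now "grasp"
```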

[–] BrickedKeyboard@awful.systems 1 points 1 year ago* (last edited 1 year ago) (1 children)

Did this happen with Amazon? The VC money is a catalyst: it's advancing money for a share of future revenues. If AI companies can establish a genuine business that collects revenue from customers, they can reinvest some of that money into improving the model, and so on.

OpenAI specifically seems to have needed about 5 months to reach a 1 billion USD annual revenue run rate; by the way tech companies are usually valued, that's already more than 10 billion in intrinsic value.

If they can't - if the AI models remain too stupid to pay for - then obviously there will be another AI winter.

https://fortune.com/2023/08/30/chatgpt-creator-openai-earnings-80-million-a-month-1-billion-annual-revenue-540-million-loss-sam-altman/
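The run-rate arithmetic behind that claim, with a rough 10x revenue multiple (the multiple is my assumption, not from the article):

```python
monthly_revenue = 80e6             # ~$80M/month, per the Fortune article
annual_run_rate = monthly_revenue * 12
revenue_multiple = 10              # assumed rough multiple for growth-stage tech
print(f"run rate: ${annual_run_rate / 1e9:.2f}B")                          # ~$0.96B/year
print(f"implied value: ${annual_run_rate * revenue_multiple / 1e9:.0f}B")  # ~$10B
```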

I agree completely. This is exactly where I break with Eliezer's model. Yes, obviously an AI system that can self-improve can only do so until either (1) it's running the best algorithm that can run on the server farm, or (2) finding a better algorithm takes more compute than the improvement is worth.

That's not a god. Do this in an AI experiment now and it might crap out at double the starting performance or less, and not even be above the SOTA.
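A toy sketch of that ceiling, with every number invented: self-improvement continues only while the expected gain is worth the search compute, and returns diminish fast.

```python
performance = 1.0    # starting model quality (arbitrary units)
gain = 0.5           # improvement the next algorithm would deliver
search_cost = 1.0    # compute needed to find that algorithm
GAIN_VALUE = 10.0    # how much compute one unit of improvement is worth

# Keep searching only while the payoff beats the search cost.
while gain * GAIN_VALUE > search_cost:
    performance += gain
    gain *= 0.5          # diminishing returns on each better algorithm
    search_cost *= 2.0   # and each one is harder to find

print(performance)   # 1.75: saturates below double the start, not unbounded
```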

But if robots can build robots, and current AI progress shows a way to do it (a foundation model trained on human tool manipulation), then...

Genuinely, I don't think it's "religion" to suggest that a huge speedup in global GDP growth would be a dramatic event.

[–] BrickedKeyboard@awful.systems 1 points 1 year ago (1 children)

Currently the global economy doubles every 23 years. Robots building robots and robot-making equipment can probably double faster than that. It won't be in a week or a month; energy requirements alone limit how fast it can happen.

Suppose the doubling time is 5 years, just to put a number on it. The economy would then be growing roughly 4.6 times faster than it is now (the growth rate scales as ln 2 over the doubling time). This continues until the solar system runs out of matter.
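A quick check of that factor (a sketch; the 5-year doubling time is the assumption above):

```python
import math

old_rate = math.log(2) / 23   # ~3.0%/yr: doubling every 23 years
new_rate = math.log(2) / 5    # ~13.9%/yr: the assumed 5-year doubling
print(new_rate / old_rate)    # ~4.6x faster growth
```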

Is this a relevant event? Does it qualify as a singularity? Genuinely asking: how have you "priced in" this possibility in your worldview?

 

First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses: the idea that 'rationalists' are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they are justified in committing any violent act needed to stop AI from being developed.

The flaw here is that there are 8 billion people alive right now, and we don't actually know what the future holds. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying "fuck 'em". This could only be worth it if you somehow actually knew trillions of people were going to exist, had a low future discount rate, and so on. This seems deeply flawed, and seems to be one of the points made here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can't solve - robotics, continuous learning, module reuse - the things needed to reach a general level of capability and for AI to do many but not all human jobs - are near-future problems. I can link DeepMind papers on all of these, published in 2022 or 2023.

And if AI can be general and control robots, then since making robots is a task human technicians and other workers can do, a form of Singularity is possible. Maybe not the breathless utopia of Ray Kurzweil, but a fuckton of robots.

So I was wondering what the people here generally think. There are "boomer" forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as hypesters who collect $300k to edit JavaScript and drive Teslas*.

I also have noticed that the whole rationalist schtick of "what is your probability" seems like asking for "joint probabilities", aka smoke a joint and give a probability.

Here are my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains humans work in?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robots, do you believe a form of Singularity will happen? By this I mean hard exponential growth in the number of robots, scaling past all industry on Earth today by at least 1 order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

  4. Do you think a mass transition, where most of the human jobs we have now are replaced by AI systems, will happen before 2040?

  5. Is AI system design an issue? I hate to say "alignment", because I think that's hopeless wankery by non-software-engineers, but given that these will be robot-controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

*"epistemic status": I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas..
