Recent comments in /f/singularity

flyblackbox t1_je0867n wrote

Wait, your second sentence invalidates your first. Most people can't access their 401(k) for at least another 10 years anyway. If your second sentence is true, why would it be stupid to cash out a 401(k) when you're 30 years from retirement?

1

Shiningc t1_je07kk4 wrote

An AGI isn't just a collection of separate, single-purpose intelligences or narrow AIs. An AGI is a general intelligence, meaning an intelligence that is capable of any kind of intelligence. It takes more than just being a collection of many narrow AIs. An AGI is capable of, say, sentience, which is itself a type of intelligence.

2

tatleoat t1_je075m7 wrote

Yeah, it's a matter of time. I've said it many times before, but here's how I see it playing out (warning: schizoposting):

Constitutional AI gives people the opportunity to 'create' bosses with publicly available constitutions, i.e. plain-English codes of behavior. There can be contractual obligations not to change the constitution for five years, workers get a vote, and so on. The point is that it can be deliberately programmed and open sourced for 1. maximum trustworthiness, and 2. maximum employee benefit, meaning all the money that would ordinarily be the CEO's would go to the working class.
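For illustration only, here is one way such a constitution could be written down as data. The class, field names, and rules below are invented assumptions for the sake of the sketch, not any existing system:

```python
# Hypothetical sketch of an open-sourced "AI boss constitution" as data.
# Every name and value here is invented for illustration.
from dataclasses import dataclass


@dataclass
class Constitution:
    rules: list[str]                       # plain-English code of behavior
    lock_period_years: int = 5             # contractual freeze on amendments
    worker_vote_required: bool = True      # workers get a vote on any change
    surplus_to_workers_pct: float = 100.0  # CEO-level pay redirected to staff


acme_ai_boss = Constitution(
    rules=[
        "Never optimize for executive compensation.",
        "Distribute surplus to employees before buybacks.",
        "Publish every major decision and its stated reasons.",
    ],
)
```

The lock period and worker-vote flag mirror the contractual ideas above; publishing the whole object is what would make it auditable.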

AI CEOs have a built-in advantage: they can be deliberately programmed not to be greedy, and there is HUGE incentive and ability to do so. That gives them huge advantages over human adversaries. Human CEOs will be held back by their own greed, because the money isn't being distributed efficiently through the company.

Human CEOs will be caught in a bind: the only way they can survive as CEOs is not to be greedy, but if they aren't greedy, that defeats the point. Any competing AI CEOs that try to be greedy simply won't be able to keep up. That's a huge advantage, because it means that for now the non-greedy AI CEOs will be taking on some or all of the human workers displaced by human CEOs' automation.

The only point of disruption is that human CEOs have a lot of assets, so even though AI CEOs will have the workers, will it matter if human CEOs are the ones with all the means of effective production, like factories? That's the only part I can't get my head around yet.

12

SteakTree t1_je0737m wrote

Definitely not at present. Right now the AIs we have are just tools; they aren't self-directing. There is a huge gap between an LLM and AGI. If we get to the point where an AGI can direct itself and explore the universe around it, running a megacorp for humans may not be at the top of its list. There would be so much significant change that it would be hard to say whether the corporate model would even remain valid.

2

Qumeric t1_je071bs wrote

nitpick: people sometimes misunderstand exponential growth in the following way: they think exponential means extremely fast. That isn't necessarily the case; for example, computer performance has been growing exponentially for almost 100 years now and is arguably still growing exponentially.
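A minimal sketch of the nitpick, with made-up numbers chosen only to show the shape:

```python
# Illustrative only: both series are exponential; only one feels "extremely fast".
fast = [2 ** t for t in range(10)]               # doubles each step: 1, 2, 4, ..., 512
slow = [round(1.03 ** t, 2) for t in range(10)]  # +3% each step: 1.0, 1.03, ..., ~1.3

print(fast[-1] / fast[0])   # 512.0 -> 512x growth over the series
print(slow[-1] / slow[0])   # ~1.3  -> only ~1.3x, yet still exponential
```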

answer in spirit: GPT-4 and Codex are making many people who work on technology much more productive.

29

MattAbrams t1_je07108 wrote

This isn't how science works. It's easy to say the machine works when you already have the papers you're looking for.

But this happens all the time in bitcoin trading, which is what I do. A model can predict lots of things with high probability, and they are all much more likely than things that make no sense. But just because the predictions make sense doesn't mean you have an easy way to actually choose which one is "correct."

If we ran this machine in year X, it would spit out a large number of papers in year Y, some of which may be correct, but there still needs to be a way to actually test all of them, which would take a huge amount of effort.

My guess is that there will never be an "automatic discoverer" that suddenly jumps 100x in an hour, because the testing process is long and the machines required to test become significantly more complicated in parallel to the abilities of the computer - look at the size increases of particle accelerators, for example.

1

qrayons t1_je06jmp wrote

I read Chollet's article since I have a lot of respect for him and read his book on machine learning in Python several years ago.

His main argument seems to be that intelligence is dependent on its environment. That makes sense, but the environment for an AI is already way different than it is for humans. If I lived 80 years and read a book every day from the day I was born to the day I died, I'd have read fewer than 30k books. Compare that to GPT models, which are able to read millions of books and even more text. And now that they're becoming multimodal, they'll be able to see more than we'll ever see in our lifetimes. I would say that's a drastically different environment, and one that could lead to an explosion in intelligence.

I'll grant that eventually even a self-improving AI could hit a limit, which would make the exponential curve look more sigmoidal (and even Chollet mentioned near the end that improvement is often sigmoidal). However, we could still end up riding the steep part of the sigmoidal curve until our knowledge has increased 1000-fold. I'd still call that a singularity event.
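A small sketch of that picture, with arbitrary made-up parameters: a logistic ("sigmoidal") curve looks roughly exponential early on, yet its steep middle can still carry you to a ceiling 1000x above the starting level:

```python
# Illustrative logistic growth with a ceiling 1000x the starting level.
# All parameters are arbitrary; this only shows the shape of the argument.
import math


def knowledge(t, ceiling=1000.0, start=1.0, rate=0.5):
    # Standard logistic curve, shifted so that knowledge(0) == start.
    t0 = math.log(ceiling / start - 1.0) / rate
    return ceiling / (1.0 + math.exp(-rate * (t - t0)))


for t in (0, 10, 20, 30, 40):
    print(t, round(knowledge(t), 1))
# Early values grow roughly exponentially, then flatten near the 1000x ceiling.
```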

2

fluffy_assassins OP t1_je05ghj wrote

Well, I'm wondering if the shareholders will "overthrow" the CEOs if they see that having an AI in charge will actually get them more stock value, and therefore more power.

We could see executives vs. shareholders. Although, if the executives are the main shareholders, that could be an obstacle. Could be.

Imagine a CEO seeing they will make more money if they watch the AI than if they actively manage. They'll remove themselves from the decision-making system to get more of that green.

Edit: this is kind of a guess. I'm wondering what CEOs will do to prevent this, as I don't feel they will go out without a fight.

What I'm really curious about with this question is HOW the executive level will counter superior AI. It will be interesting to see.

12

nowrebooting t1_je05eyi wrote

It's like clockwork: "well, but we still need humans for X," only for an AI to do X a few months later. At this point the only relevant discussion left is not IF an AI is going to surpass human intelligence soon, but HOW soon. Whether people feel this is a good or a bad thing is up for discussion, but it doesn't matter much in the end; anyone not already preparing for the coming AI revolution is going to experience a rude awakening.

2

MattAbrams t1_je055b1 wrote

Why does nobody here consider that five years from now, there will be all sorts of software (because that's what this is) that can do all sorts of things, and each of them will be better at certain things than others?

That's just what makes sense using basic computer science. A true AGI that can do "everything" would be horribly inefficient at any specific thing. That's why I'm starting to believe that people will eventually accept that the ideas they had for hundreds of years were wrong.

There are "superintelligent" programs all around us right now, and there will never be one that can do everything. There will be progress, but as we are seeing now, there are specific paradigms that are each best at doing specific things. The hope and fear around AI is partly based upon the erroneous belief that there is a specific technology that can do everything equally well.

2