Recent comments in /f/singularity

D_Ethan_Bones t1_je2q0o2 wrote

Friends gone and family mostly gone, told my mother about Stable Diffusion with a quick cheerful text and otherwise left it.

When people on the internet are either extremely optimistic or extremely pessimistic I leave them be because I don't kick hornets' nests without a reason. When people are somewhere in between I advise against extremes.

We don't agree on what AGI is, so depending on the definition there's either an imminent AGI with ASI following some years later, or a later AGI followed almost immediately by ASI.

Some people routinely ascribe completely superhuman abilities, beyond processing and memory, to AGI, which places ASI roughly around 'the morning after.' Some people call AGI simply a digital humanoid, which places ASI maybe a few years later, or a few decades if we hit a plateau and struggle to get higher.

I'm expecting a humanity barrier that takes massive human effort to cross; before that point, the AI and the world around it remain stuck at a level vastly above 2010 technology but vastly below 2030–2050 technology.

2

suicidemeteor t1_je2p2rw wrote

I'm a CS major, first year currently, and I'm of the opinion that programming will be one of the last major jobs to be fully automated.

This is for the simple fact that once an AI can code as proficiently as humans, it will rapidly be able to iterate on itself to a degree that will functionally destroy all intellectual work.

I'm planning my life as though the singularity won't happen because for me it's frankly irrelevant. If it does happen then I'll sit back and watch the fireworks. I'll likely be out of a job, along with every other intellectual worker. While some workers might remain (particularly welders, mechanics, plumbers) I doubt those fields would be in any way recognizable.

Trades would be de-skilled to a frankly ridiculous degree. All it takes is a GoPro, an earbud, and a superintelligent AI (plus maybe a week or two of training) and you can turn just about anyone into a "good enough" tradesman. The gaps in knowledge, experience, and safety can be filled in by a superintelligent manager looking over your shoulder. So in other words, deciding to go into something like welding is moot when those fields would be unrecognizable.

1

Bismar7 t1_je2oa5n wrote

People are surprisingly foolish about this subject.

AI will make us more efficient; it won't replace us.

When it gets to the point where we can augment our minds with it, we will; synthesis is likely the pinnacle moment.

In the meantime, people, programmers included, will be able to do more in less time. Demand for digital goods will keep pace with our ability to design them.

1

whateverathrowaway00 t1_je2o8px wrote

There’s also a middle ground you’re missing.

People who think you might be right, but who find it equally likely that things land somewhere in between, and that we'll keep working until that happens.

It’s distinctly possible an AI will put me out of a job in which case I’ll sell this house, move into a shitty rental, and borrow from fam to go back to school. Probably trade school as I suspect that’s what I should’ve done years back instead of doing the CCNA->NetEng route.

That said, I think it's equally likely that the situation lands somewhere between the total doom on one side and the "this will change nothing" head-in-the-sand on the other.

Like, what do you propose we do?

Many of the devs you're sneering at are probably secretly looking forward to it so they can finally walk into the woods and never hear that fucking Slack wood-knock sound again.

1

ExtraFun4319 t1_je2nmcu wrote

>There is some serious cope going on in programming subs

There's cope going on in this sub, too. "AGI 2023!" is clearly cope to me, cope that comes from people who desperately want AI to rescue them ASAP.

And the fact that no serious AI scientist (or any AI scientist) believes such a thing (AFAIK) only bolsters my view.

3

D_Ethan_Bones t1_je2nk1g wrote

>Why did it take so long to get here when we had exponential growth 60 years ago?

60 years ago there was TV and the cold war.

60 years prior the automobile was the latest greatest invention and radio had not reached the point of entertainment broadcast. https://en.wikipedia.org/wiki/History_of_radio#Broadcasting

60 years further back, agriculture in the southern United States still involved chattel slavery.

1

IndependenceRound453 t1_je2n1ak wrote

Why does this subreddit seem to only attract people who believe we'll be out of work next Tuesday?

I frequent other TECH subreddits and TECH forums/websites on the internet, but this is the only one I visit where the overwhelming majority believes an AI-induced job apocalypse is coming very soon. The other communities I'm part of have more balanced, grounded, realistic takes on the future of work and AI.

3

Sigma_Atheist t1_je2n0zv wrote

Amazing. In the very first sentences, you can tell that the author knows nothing about quantum computing:

"Recently, IBM and the Cleveland Clinic unveiled a quantum computer that could advance medical innovation like never before. The IBM Quantum System One was created to crunch large amounts of data at high speeds."

Senseless hype.

Edit: They didn't even report on the qubit count!

34

Readityesterday2 t1_je2mpec wrote

Don’t underestimate the infallible human mind. Its ability to lie to itself. To evaluate the world through a distorted lens. To reflect with perpetual bias. To be obsessed with self-righteousness rather than truth-discovery.

Our broken thinking will fuck us over more than AI could.

What’s funny is it doesn’t matter how educated the mind is. From technologists to data scientists, I have seen them lie to themselves and not blink at the recognition of how faulty their thinking was.

It’s all a lesson for the few of us who cautiously guard our thinking apparatus. Gimme a like if you’re one of them.

1

datalord t1_je2mkh9 wrote

The “Sparks of AGI” paper mentioned above is literally published by Microsoft who researched it alongside OpenAI.

This paper, published yesterday by OpenAI themselves, discusses just how many people will be impacted. His Twitter post summarises it well.

Sam Altman recently spoke to Lex about the power and limits of what they have built. They also discussed AGI. Suffice it to say, those working on it are talking about it at length.

5