Recent comments in /f/singularity

Professional_Copy587 t1_je0fk5l wrote

Yes, it's very transformative technology. You cannot, however, leap from that to AGI and the singularity. All you are doing is setting yourselves up for disappointment. The disparity between the thoughts of the professionals and academics working on this and the views of this sub is astounding. Yet every time they are mentioned, it's passed off as being overcautious. Nobody is moving the goalposts except for the people on this sub.

5

1II1I11II1I1I111I1 t1_je0fjf3 wrote

Twitter (Takes a while to curate your feed, but you get the freshest information there, as well as quality informed content if you follow the right people i.e. academics and researchers)

r/singularity (the rest of Reddit is far behind when it comes to talking about AI; r/ChatGPT can have good content amongst all the garbage)

YouTube (AI Explained, Firecode, Robert Miles. Content is very quickly outdated though)

Less Wrong

Hacker News

I actually think people on HN are pretty informed on the rate of change in AI. The recent post about a 3D artist becoming disillusioned with their work after being 'replaced' with GPT had a lot of comments clearly discussing the immediate and massive impact AI will have on society.

7

1II1I11II1I1I111I1 t1_je0drvf wrote

Bruh...

The goalposts for AGI are continually moved by people who want to remain ignorant.

Transformative technology is literally already here. Within a year, GPT-4 will be involved in most people's personal or professional lives. Now realise that the technology is only improving (faster than predicted).

Would anyone hire you over GPT-4? How about GPT-5? What about GPT-6 with internet access, and full access to and memorization of your company's database?

19

Turingading t1_je0dk9w wrote

Publicly-traded companies are required to reduce costs however possible. Jobs that can now be automated will be automated. That is unavoidable.

The only way to avoid a widening of the wealth gap is to increase corporate taxation and expand social services.

Half the U.S. unilaterally opposes those changes, so for us social stratification by wealth will continue to solidify and the average citizen's suffering will increase.

It's possible that after a sufficient number of poor people have suffered there will be correction through legislation, but I predict it will be reactive rather than proactive.

10

iNstein t1_je0df7e wrote

It is about impact. For the first 95 years, it seems kinda slow. Then it appears to speed up because the impact is so great. The graphic of a pond filling up is a great example. 1 drop, 2 drops, 4 drops isn't gonna interest anyone. When the pond is 1/4 full and then doubles to half full and then doubles again to full, it suddenly looks fast as hell. It is the same with the singularity.
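To put the same point in numbers, here's a minimal Python sketch of the doubling pond (the pond size and the single starting drop are arbitrary assumptions for illustration, not anything from the comment):

```python
# A pond that doubles its water every step: it looks almost empty
# for most of the run, then appears to fill "suddenly" at the end,
# even though the growth rate never changed.
pond_capacity = 1_000_000  # arbitrary units
water = 1                  # start with a single drop

step = 0
while water < pond_capacity:
    step += 1
    water = min(water * 2, pond_capacity)
    print(f"step {step:2d}: {100 * water / pond_capacity:6.2f}% full")
```

Of the ~20 steps it takes to fill, everything before the last handful registers as well under 10% full, which is why constant doubling reads as a sudden takeoff.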

7

Professional_Copy587 t1_je0cx1q wrote

No. We don't even know how to build AGI.

The ridiculous thing is that in 9 months' time, the people on this sub deluding themselves in an echo chamber will be the same ones declaring an AGI winter because it hasn't met their own unrealistic expectations.

0

ptxtra t1_je0b8wc wrote

Everyone will think it's impossible, some small startup will do it while beating all the competition by a large margin, and then big CEOs will scramble to integrate the benefits of AI management into their companies while keeping their positions. If it gets banned here, the Chinese will do it.

19

Prymu t1_je0arxz wrote

Yeah, I was watching Two Minute Papers' latest video about GPT-4, and when he said THE line, I thought (and even commented) that the "2 more papers down the line" point has already passed. The video was (I think) based mostly on the technical report, so with "Sparks of AGI" and reflection, or even Alpaca, we have already passed 2 massive papers.

29

homezlice t1_je09zjv wrote

OK, I'll bite. First off, shareholders are not the ones who directly control appointing a CEO in publicly traded companies; that generally goes to the board. The board would need a human in charge of whatever AI oversaw a company for legal reasons alone, because otherwise who would be liable for criminal wrongdoing, taxes, etc.? Companies are formed from the ground up with the assumption of humans in control. Even if shareholders decided they wanted an AI in charge, it just could not happen; an S Corp requires humans in the loop, at the top.

Now, an AI for sure could be running the vast majority of the day-to-day operations. But for an AI to actually be CEO would require upending hundreds of years of law. I don't expect it to actually happen; instead, CEOs will control AI and reduce the human headcount below them. Bummer, I know... and maybe that will then trigger bigger economic change. But the idea that we are going to jump right to AI being considered legally human is unbelievably far-fetched and unlikely.

3

RealFrizzante t1_je09wve wrote

As many as there are schools of thought that maintain otherwise.

Anyway, a human has agency over what they want to do; the AI we have now is nowhere near having its own agency (and with this approach never will be).

AGI does have intentions and its own agency, even if it is subordinate to human goodwill.

What we have now is really cool, useful and disruptive PROMPTS.

AGS: Artificial General Slaves

1

Gaudrix t1_je08kxu wrote

This technology makes tutors nearly obsolete. Only small improvements need to be made in reliability and consistency. It can already be configured to approach text from the perspective of a certain level of skill or education. GPT-4 won't remove that many jobs, but GPT-5 will be able to fill in almost all of the gaps preventing that now.

6