Recent comments in /f/singularity

greatdrams23 t1_je2lsbg wrote

I understand exponential growth perfectly well. We've had it for the last 60 years, but it still took 60 years to get this far.

Why did it take so long to get here when we had exponential growth 60 years ago?

Ans: because exponential growth still takes time!

Let's say we need another 1,000,000 times the computing power that we have now. How long will that take?
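
A quick back-of-the-envelope sketch of that point, assuming a Moore's-law-style doubling of compute every ~2 years (my assumption, not something the comment specifies):

```python
import math

target_factor = 1_000_000       # hypothetical: how much more compute we'd need
doubling_period_years = 2       # assumed Moore's-law-style cadence

doublings_needed = math.log2(target_factor)            # ~19.9 doublings
years_needed = doublings_needed * doubling_period_years

print(f"{doublings_needed:.1f} doublings -> ~{years_needed:.0f} years")
# ~19.9 doublings -> ~40 years at that pace
```

So even sustained exponential growth implies decades, not months, under those assumptions.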

1

maskedpaki t1_je2lgig wrote

Ilya Sutskever literally believes that next-word prediction is general purpose, so you are just wrong on this.

The only thing he is unsure about is whether something more efficient than next-token prediction gets us there first. It's hard to defend Gary Marcus's view that GPT isn't forming real internal representations when we can see that GPT-4 so obviously is.

1

confused_vanilla t1_je2ker5 wrote

I'm also in the field and have noticed the same thing. The fact that none of my friends and family see the implications makes me feel like I'm going crazy or something. It may not happen as fast as I think it will, but I really don't see how it doesn't replace us very soon. I'm sure it will also be able to handle the non-coding aspects just as easily as it does the coding.

4

SkyeandJett t1_je2j250 wrote

Yeah, I'm in an adjacent field (FPGA) and I'm not sure what they're talking about. The "non-code" parts are even EASIER for AI. For instance, I just finished a requirements sprint: half a dozen engineers over several months for something that could probably have been done by GPT-4 with a DOORS plugin in an afternoon. We'd still have a review to validate its work, but that's a MAJOR disruption in manpower needs.

I think there's a big disconnect between how it works right now and how it will work (or even can work now if you set it up right). The next version will do it more or less how we do it. It's not a zero-shot approach: it'll write the code, compile it, fix warnings and errors, and then write unit tests to validate the code and hand you the work package.
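
A rough sketch of the kind of loop being described, purely illustrative; the model and toolchain calls below are hypothetical stubs, not a real API:

```python
# Sketch of the iterative loop described above: draft, compile, fix
# warnings/errors, then validate with unit tests before handing off.

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a code-generating model (hypothetical)."""
    raise NotImplementedError

def compile_and_check(code: str) -> list[str]:
    """Placeholder: return compiler errors/warnings, empty list if clean."""
    raise NotImplementedError

def run_tests(code: str, tests: str) -> str:
    """Placeholder: run the unit tests and return a report."""
    raise NotImplementedError

def build_work_package(spec: str, max_rounds: int = 5):
    code = ask_model(f"Write code for this spec:\n{spec}")
    for _ in range(max_rounds):
        issues = compile_and_check(code)
        if not issues:
            break                                   # clean build, stop iterating
        code = ask_model(f"Fix these issues:\n{issues}\n\n{code}")
    tests = ask_model(f"Write unit tests for:\n{spec}")
    report = run_tests(code, tests)
    return code, tests, report                      # the "work package"
```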

3

MNFuturist t1_je2isvz wrote

I've been a professional futurist for 10+ years helping my clients with emerging tech and trends, and the one constant across industries has been "... but it could never do my job." I get it, though: if you spent your whole career getting really good at something, respected by your peers, earning a good living, etc., it's really difficult to accept that it could suddenly be automated (or even partially automated). We're about to see a lot more of this in areas where people felt "safe" and thought they had a long time to adapt, and now they don't. It's going to be rough. (Btw, I have no illusions that my career as a keynote speaker is safe.)

6

Crulefuture t1_je2ijzq wrote

I think it's rather optimistic to expect we'll have the tech necessary to make all programmers obsolete in only three years. It's more likely that AI could largely wipe out junior/entry-level positions in that time, which sounds less crazy or far-fetched.

1

SgathTriallair t1_je2i68t wrote

While I agree that we will have superhuman AI soon, the fact that a lot of expert groups disagree is evidence against that idea and shouldn't be discarded without reason.

I do think they're incorrect, but it's important not to get high on your own supply and decide that everyone who disagrees with you is wrong solely because they disagree with you.

3