Recent comments in /f/singularity

JacksCompleteLackOf t1_je389eh wrote

Actually, I think you're right and they did mention it. I guess I wish they would have emphasized that aspect more than the 'sparks of general intelligence'. It's mostly a solid paper for what it is. They admit they don't know what the training data looks like. I just wish they would have left that paragraph about the sparks out of it.

1

FoniksMunkee t1_je37buh wrote

I'm pretty sure they mentioned something like that in passing, didn't they? I know they have a section in there talking about how it fails at some math and language problems because it can't plan ahead and can't make leaps of logic. And they considered these substantial problems with GPT-4 with no obvious fix.

4

lehcarfugu t1_je37965 wrote

It has yet to be seen whether current language models will be capable of creating software as well as humans. Because their training datasets are based on human text, they're limited in their intelligence. It may require new breakthroughs, or a different pathway, to reach something equivalent to AGI. This view is certainly not uncommon among AI researchers, so maybe you should try to be understanding of different views, rather than blindly believing LLMs will eclipse humans shortly.

1

dieselreboot t1_je372im wrote

I think we're seeing the first signs of things accelerating exponentially through natural-language coding tools that are descended from GPTs. Case in point would be OpenAI's Codex, based on GPT-3, which powers GitHub's Copilot and now Copilot X (GPT-4?). GitHub Copilot is an AI 'pair programmer' that helps the coder write code. These tools are available as extensions in the Integrated Development Environments (IDEs) used by developers worldwide.

I'm willing to wager that the developers at OpenAI, and the developers of the Python/C libraries that GPT relies upon, such as TensorFlow and NumPy, are using Codex/Copilot or vanilla ChatGPT (GPT-4). They'll be using these tools to help them write the next generation of GPTs or their dependencies.

As each new version of TensorFlow, NumPy, GPT, Codex, or Copilot comes out, it would be interesting to see what percentage of the codebase has been written by an AI. Humans are in the loop for now, but their contributions will get smaller over time. As the software development and improvement process becomes more automated, the time between releases will contract.
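One rough way to estimate that percentage, sketched below under an assumption the comment doesn't specify: that AI-assisted commits carry a `Co-authored-by: ... Copilot ...` trailer in their messages (a convention some GitHub tooling uses, but by no means universal). The commit records here are hypothetical stand-ins for what you'd pull from `git log`.

```python
import re

# Assumed tagging convention: AI-assisted commits include a
# "Co-authored-by: ... Copilot ..." trailer in the commit message.
AI_TRAILER = re.compile(r"^co-authored-by:.*copilot", re.IGNORECASE | re.MULTILINE)

def ai_authored_fraction(commits):
    """Fraction of added lines that came from commits tagged as AI-assisted.

    `commits` is a list of dicts with "message" and "lines_added" keys,
    e.g. parsed out of `git log --numstat`.
    """
    total = sum(c["lines_added"] for c in commits)
    ai = sum(c["lines_added"] for c in commits if AI_TRAILER.search(c["message"]))
    return ai / total if total else 0.0

# Hypothetical example data:
commits = [
    {"message": "Fix tokenizer edge case", "lines_added": 40},
    {"message": "Add cache layer\n\nCo-authored-by: Copilot <copilot@github.com>",
     "lines_added": 60},
]
print(ai_authored_fraction(commits))  # 0.6
```

In practice the hard part isn't the arithmetic, it's attribution: an inline Copilot suggestion edited by a human doesn't leave any trailer behind, so any such number would undercount.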

Codex/Copilot is being used to write software. All coders will be using Copilot or seeking other work. All software will have an ever-increasing percentage composed by an AI, and this includes the next version of the AIs and the libraries they depend upon. This has the potential to 'take off' very quickly: self-improving AI before AGI/ASI. I think the singularity has already begun, to be honest; at the very least we're falling into it.

1

FoniksMunkee t1_je3705l wrote

Microsoft may have agreed. In the paper they released that talked about "sparks of AGI", they identified a number of areas that LLMs fail at, mostly forward planning and leaps of logic or Eureka moments. They actually pointed at LeCun's paper and said that's a potential solution... but that suggests they can't solve it yet with the ChatGPT approach.

3

FoniksMunkee t1_je36ntn wrote

If it really is accelerating exponentially, then most people cheering for this will be out on their arse with no job the next day.

And those that don't lose their job, will lose it shortly afterwards as the next exponential leap comes.

It's amazing tech, but we don't control it, corporations do, and what happens when corporations smell profit?

2

NVincarnate t1_je36cxc wrote

The same advice I've given people my whole life:

Work on your neuroplasticity, not your knowledge. What you know will become less and less important as technology evolves. How fast you learn to adapt to new circumstances is always far more important than what you think is a stable income or job now.

2

play_yr_part t1_je36aro wrote

No advice until they talk to me about it first. I'll let them live in blissful ignorance for however long it takes them to see enough to want to talk to me about it. I don't have the skills to help them make hay in the meantime, and I don't want to freak them out. If my SO carries on with her teaching course she'll find out soon enough anyway.

5

Ishynethetruth t1_je35nnh wrote

When COVID hit and everyone around the world was in lockdown, it felt like everyone was on the same page, dealing with the same problem. If the singularity does happen, I'm trying to figure out the first outcome. How will the world 🌎 react? Is there a plan? Is everything going to crash? Are we going to be free, or are we going to get the iPhone 16 and call it a day?

2