Recent comments in /f/singularity

flyblackbox t1_je1p1b9 wrote

Just to add my two cents, the article you cited by Kurzweil has had a bigger impact on my world view than anything else I’ve read before or since. I read it in 2003 and I’m still convinced it is a sound theory.

I’m curious if anyone else who read it has begun to have doubts since?

6

Wapow217 t1_je1nic0 wrote

It just showed up. I didn't know this was the plug-in part I was waiting for; I was expecting something different, I guess. I saw it last night on my account for the first time and was just messing with some old homework to see if it got better with code, and it did. It basically just wrote and ran its program within the chat. I think it makes it much cleaner too. Now I will have to play with more of the actual plug-in part.

Edit: My bad, this is different from what I got yesterday. That was just the code interpreter, which lets you download now.

5

Villad_rock t1_je1mg84 wrote

A lot of money is in software. Most startups, as well as companies with high market caps, are in tech, and there aren’t even enough programmers to meet demand. AI will make them much more productive.

It doesn’t need to affect every industry immediately to produce a lot of money and competition.

Adobe, Google, Unity, and soon many more are coming out with their own AI products so as not to be left behind.

The manufacturing and transport industries etc. aren’t even all that important for accelerating toward AGI, because computer scientists and programmers, who sit in the tech industry, are the biggest contributors.

We also don’t need ASI or replicators, just robots that dig everything up, manufacture it, and transport it, which requires AGI. At that point there will be no real economy anymore.

Really weird how you talk about money in an age of AGI/ASI.

1

Belostoma t1_je1m50i wrote

It's not happening yet. There's accelerating growth due to increased interest and understanding from humans seeing what this stuff can do, but the exponential growth associated with a true singularity will come from the AI being capable of improving itself much better than humans can. The AI improves itself, gets better at improving itself due to the improvements, improves itself even more, and so on recursively.

The capability of AI in computer programming right now is impressive, but it's not at the level of understanding really complex programs (like itself) well enough to debug them, let alone reason about how to improve them. AI is scary good at one-off programming puzzles that are easy to specify fully and briefly, but that's a very different task from understanding how all the parts of a large, complex program work together and coming up with novel ideas to rearrange them to serve some over-arching goal.

I think some of the recursive self-improvement will begin with some combination of human and machine intelligence, but right now the AI is really just a time-saver to make human coders somewhat more efficient, rather than something that greatly expands their capabilities.

2

-I-D-G-A-F- t1_je1kznl wrote

https://en.m.wikipedia.org/wiki/Attention_schema_theory

I’d recommend reading about this, and possibly reading Graziano’s book “Rethinking Consciousness.”

Attention is something that all AI seems to currently lack. They just wait for an input and provide an output. Attention generates a simplified model of both the external and internal world.

>The AST can be summarized in three broad points.[1] First, the brain is an information-processing device. Second, it has a capacity to focus its processing resources more on some signals than on others. That focus may be on select, incoming sensory signals, or it may be on internal information such as specific, recalled memories. That ability to process select information in a focused manner is sometimes called attention. Third, the brain not only uses the process of attention, but it also builds a set of information, or a representation, descriptive of attention. That representation, or internal model, is the attention schema.

>In the theory, the attention schema provides the requisite information that allows the machine to make claims about consciousness. When the machine claims to be conscious of thing X – when it claims that it has a subjective awareness, or a mental possession, of thing X – the machine is using higher cognition to access an attention schema, and reporting the information therein.


3

dontpet t1_je1jqjg wrote

Humans are vulnerable to thinking they are living in special times. Eschatology is the word for it.

I'm old and was raised on the dreams of scifi in the 60s. I've always thought we might get to a singularity one day, possibly in my lifetime, but I never thought we were on the cusp of it until now.

8

Emory_C t1_je1ipi9 wrote

>Half the decisions made at large organizations are wrong. There’s plenty of literature on c-suite decision-failures. This means there is room for improvement

GPT-4 would be very prone to hallucinating "wrong" answers as well.

Stockholders want somebody they can fire.

3

AsuhoChinami t1_je1hhn6 wrote

Well, I don't want to be rude to someone who has been polite to me, it's just... this "We're currently in an AI winter" claim is about as credible as a Flat Earth argument. I don't really see the need to dig deeply into a viewpoint that's transparently ludicrous, any more than I would research Moon landing conspiracy theories or watch an hour-long video about why Barack Obama was assassinated in 2011 and replaced by a robot created by the child of John Wayne Gacy and Abraham Lincoln.

5