Surur t1_j9ntwmv wrote

> I know that the computing power necessary for the most successful models far outstrips what your average consumer is capable of generating.

The training is resource intensive. The running is not, which is demonstrated by ChatGPT being able to support millions of users concurrently.

Even if you need a $3000 GPU to run it, that's a trivial cost for the help it can provide.

3

Surur t1_j9ezsf8 wrote

I think emotion is just a bias that influences decision making. An AI will presumably be able to make decisions more precisely than that, though in our messy world having such shortcuts may actually be better and more efficient than keeping a full list of someone's previous history in your "context window".

−2

Surur t1_j9aqnvt wrote

> a toilet can respond to external stimulus, remove water when you press the lever and add water until it senses it is full, I am pretty confident it is not conscious.

It is conscious of whether you pressed the lever or not.

You seem to be missing the point which is that there is a spectrum of consciousness, and the richer it is, the more conscious the being is.

0

Surur t1_j9aj4ck wrote

This is exactly the mumbo jumbo I was talking about, which people invent to separate themselves from machines and animals.

The simple fact is that at its most basic, consciousness means being able to perceive and respond to external stimuli.

It's merely because of all the nonsense you add that you can claim supremacy over a simple car.

1

Surur t1_j96y9q7 wrote

That is actually not the definition.

consciousness

noun

1. the state of being aware of and responsive to one's surroundings. "she failed to regain consciousness and died two days later"

2. a person's awareness or perception of something. "her acute consciousness of Luke's presence"

Now you can add all kinds of mumbo jumbo magic but that's not the definition.

1

Surur t1_j8yo2ed wrote

He's right though. As someone else said recently, there is only one safe solution and millions of ways to F it up.

The main consolation is that we are going to die in any case, AI or no AI, so an aligned ASI actually gives us a chance to escape that.

So my suggestion is to tell him he can't get any more dead than he will be in 70 years in any case, so he might as well bet on immortality.

6

Surur t1_j8wmbvu wrote

Reply to comment by Snipgan in Is chatGPT actually an AI? by Snipgan

Yes, but that is also reasonable, since chatGPT is so accomplished.

But it does have to tick all the boxes, and ChatGPT can't learn anything new, for example. Its reasoning capabilities are pretty good but still flawed, with basic logic errors at times.

2

Surur t1_j8wh1fj wrote

Reply to comment by Snipgan in Is chatGPT actually an AI? by Snipgan

> So, if it is complexity that determines if it is an AI, what is the threshold for it being complex enough?

A reasonable question. I am sure you have purchased some home appliances with the AI label that simply chooses the right wash program based on some sensors, and the developers call that AI, so it's just a label really.

The question is not whether ChatGPT is AI, it's whether it is an AGI, and for that it will need to fulfil a variety of criteria: being able to reason, problem-solve, learn and plan at the same level as a human across a broad range of areas.

Clearly ChatGPT can not do that yet, so it's not an AGI.

It can, however, be envisioned that these capabilities will be developed, and future LLMs with the right additions could meet the criteria for AGI.

2

Surur t1_j8wes6i wrote

Reply to comment by Snipgan in Is chatGPT actually an AI? by Snipgan

You are oversimplifying.

A calculator can not accurately predict a complex pattern. The more complex the pattern the more complex the algorithm would need to be, and that complexity is what we call intelligence.

Think it through carefully - surely you would need to be very intelligent to generate coherent and on-topic text.

1

Surur t1_j8w68z6 wrote

ChatGPT is an AI like every other AI currently in use. Is it an AGI? Definitely not.

How it's trained is simple, but the result is obviously very sophisticated. It takes a huge amount of intelligence to accurately predict the next word in a sensible and on-topic way.
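To make that "simple training, sophisticated result" point concrete, here is a minimal sketch of next-word prediction using a toy bigram counter. This is my own illustrative example, not how ChatGPT is actually built: real LLMs use transformers over billions of tokens, but the training objective is the same shape, namely predict the most likely next token given what came before.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model; GPT-style
# transformers do the same next-token task with far more context).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the next word most often seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" ("cat" follows "the" most often)
```

The training loop is trivially simple, yet scaling the same objective up is what produces coherent, on-topic text.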

4

Surur t1_j8vx5w4 wrote

Google has already done that and it works really well, but a bit slowly. There is no reason the technology can not improve with time.

https://say-can.github.io/

https://youtu.be/ysFav0b472w

I think this idea is new and pretty cool however.

> Without getting into details like neural networks, transformer, and whatnot, **I figure we can use the same tech to be able to predict the next physical movement a robot does.** So if you were to construct a robot that looks like a human and has the same abilities, i.e it can rotate and extend its limbs the same way, then given enough data it could learn to move like a human the same way ChatGPT can talk like a human.

> The input for this would be video footage and software that can identify limb movements. An easy way to start would be to tape a factory line where human workers do some kind of repetitive movements. Next thing you know, we could have robots doing dishes and mopping the floor! Add ChatGPT-like abilities and it will be able to talk as well.

It would be like physical intelligence.
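The quoted idea can be sketched in miniature: treat each captured pose as a "token" and predict the next pose from recorded demonstrations, exactly like next-word prediction. Everything here is made up for illustration (the joint names, the angle values, the quantization step); a real system would use pose estimation from video and a learned sequence model rather than a lookup table.

```python
from collections import Counter, defaultdict

# Hypothetical recorded demonstration: each pose is a tuple of joint
# angles in degrees (shoulder, elbow), as some pose-estimation software
# might extract from video of a repetitive factory task.
demo = [(0, 0), (10, 5), (20, 10), (30, 20), (20, 10), (10, 5), (0, 0),
        (10, 5), (20, 10), (30, 20)]

def quantize(pose, step=5):
    """Bucket angles so similar poses map to the same 'token'."""
    return tuple(round(a / step) * step for a in pose)

# Count which pose follows each pose, mirroring next-word prediction.
successors = defaultdict(Counter)
for prev, nxt in zip(demo, demo[1:]):
    successors[quantize(prev)][quantize(nxt)] += 1

def predict_next_pose(pose):
    """Return the most frequently observed successor of the current pose."""
    return successors[quantize(pose)].most_common(1)[0][0]

print(predict_next_pose((10, 5)))  # prints (20, 10)
```

A transformer trained on long pose sequences would generalize far beyond this lookup table, but the autoregressive objective is the same.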

1

Surur t1_j8o203a wrote

> Death gives life meaning.

There is a theory that people only say this because they know they will die, and if they actually had the option of immortality, they would grab it with both hands and feet.

The truth is that life has no meaning, and you are just here to enjoy the ride. If you enjoy the ride you may want to stay on a bit longer.

> Immortality is infinite suffering

You always have the option of checking out.

30