Surur

Surur t1_j72ink8 wrote

> Seeing them IS the visceral experience I'm talking about.

I thought you said adding vision wouldn't make a difference? Now seeing is a visceral experience?

> All of this interaction, including the abstract thoughts of it (because thinking itself is cellular activity, neurons are signaling each other to trigger broader associations formed from the total chain of cellular activity those thoughts engaged), together form the "visceral experience."

You are stretching very far now. So thinking is a visceral experience? So an AI can now also have visceral experiences?

> "Superhuman performance" is on specific BENCHMARKS only.

The point of a benchmark is to measure things, so I am not sure what you are implying. Are you saying it is not superhuman in the real world? Who do you think reads the scrawled addresses on your envelopes?

> And please try to give me an abstract concept you think doesn't have any experiences tied to your understanding of it.

Everything you think you know about cells is just something you have been taught. Every single thing. DNA, cell division, the cytoskeleton, neurotransmitters, rods and cones, and so on.

1

Surur t1_j72cx9j wrote

> But we CAN see cells. We made microscopes to see them.

That is far from the same. You have no visceral experience of cells. Your experience of cells is about the same as an LLM's.

> This is the difficulty we face in science right now, the world of the very small and the very large is out of our reach and we have to make a lot of indirect assumptions that we back with other forms of evidence.

Yes, exactly, which is where your theory breaks down.

The truth is we are actually pretty good at conceptualizing things we cannot see, hear or touch. A visceral experience is not a prerequisite for intelligence.

> I am trying to argue here is that “intelligence” is complex enough to be inseparable from the physical processes that give rise to it.

But I see you have also made another argument - that cells are very complex machines which are needed for real intelligence.

Can I ask what you consider intelligence to be? Computers are superhuman when it comes to describing a scene, reading handwriting, understanding the spoken word, playing strategy games, and a wide variety of other things which are considered intelligent. The only issue so far is bringing them all together, and that seems to be only a question of time.

1

Surur t1_j720v0s wrote

I already made a long list.

Let's take cells. Cells are invisible to the naked eye, and humans only learnt about them in the 1600s.

Yet you have a very wide range of ideas about cells, none of which are connected to anything you can observe with your senses. Cells are a purely intellectual idea.

You may be able to draw up some metaphor, but it will be weak and non-explanatory.

You need to admit you can think of things without any connection to the physical world and physical experiences. Just like an AI.

2

Surur t1_j71a7p0 wrote

> And that's what ChatGPT does: it shuffles words around, and it's pretty good at mimicking an understanding of grammar, but because it has no mind -- no understanding -- the shuffling is done without regard for the context that competent speakers depend on for conveying meaning. Every word that ChatGPT utters is "on holiday."

This is not true. AFAIK it is a 96-layer neural network with billions of parameters.
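
For scale, here is a minimal back-of-the-envelope sketch in Python of why a 96-layer transformer at GPT-3's published width comes to roughly 175 billion parameters. ChatGPT's exact configuration has not been published, so the numbers below are GPT-3's, used only as an illustrative assumption:

```python
# Back-of-the-envelope parameter count for a GPT-3-scale transformer.
# Layer count (96), hidden size (12288) and vocabulary (~50k) are GPT-3's
# published figures; ChatGPT's exact configuration is not public, so this
# is only an assumption used to illustrate the scale.

n_layers = 96
d_model = 12288
vocab_size = 50257

# Each transformer block: roughly 4*d^2 for attention plus 8*d^2 for the MLP.
params_per_layer = 12 * d_model ** 2
embedding_params = vocab_size * d_model

total = n_layers * params_per_layer + embedding_params
print(f"~{total / 1e9:.0f}B parameters")  # ~175B
```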

1

Surur t1_j70yxaq wrote

I think it's very ironic that you talk about grounded visceral experiences when much of what you are talking about is just concepts. Things like cells. Things like photons. Things like neural networks. Things like molecules and neurotransmitters.

You need to face the fact that much of the modern world, and your understanding of it, owes nothing to what you learnt as a baby when you learnt to walk; a lot of what you know lives in an abstract space, just as it does for a neural network.

I asked ChatGPT to summarise your argument:

> The author argues that artificial intelligence is unlikely to be achieved as intelligence is complex and inseparable from the physical processes that give rise to it. They believe that the current paradigm of AI, including large language models and neural networks, is flawed as it is based on a misunderstanding of the symbol grounding problem and lacks a base case for meaning. They argue that our minds are grounded in experience and understanding reality and that our brains are real physical objects undergoing complex biochemistry and interactions with the environment. The author suggests that perception and cognition are inseparable and there is no need for models in the brain.

As mentioned above, you have never experienced wavelengths or fusion - these are just word clouds in your head, taught to you through words and pictures and videos, a process which is well emulated by an LLM. So your argument that intelligence needs grounded perception is obviously flawed.

Structured symbolic thinking is something AI still lacks, much like many, many humans, but people are working on it.

1

Surur t1_j6xkp88 wrote

So the OP's question was:

> Do you think that we'll relinquish control of our infrastructure including farming, energy, weapons etc?

To which I said yes. The reason is that AI will be more efficient than us at running it, which will lead market forces to make us relinquish control to AI or be out-competed by those who already have.

If things go south at a power station, only a very few people can respond, and in all likelihood they will no longer be there, as they have not been needed for some time.

Practically speaking, you may want an AI to balance a national grid to optimise the use of variable renewable energy.

Such an AI will not be under human control, as it will have to act far too quickly for a human to be in the loop (see the toy timing sketch below).

So just like that we have lost control, and if the AI wants to bring down the grid, there is nothing we can do about it.
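
To make the timing point concrete, here is a toy sketch in Python. Every name and number is invented for illustration; it is not any real grid-control system or API. The point is only that a control loop which must close within seconds leaves no room for a human sign-off:

```python
import random
import time

HZ_NOMINAL = 50.0   # target grid frequency in Hz (60.0 in some regions)
CYCLE_S = 2.0       # illustrative control-cycle deadline, in seconds

def read_frequency() -> float:
    # Stand-in for real telemetry: nominal frequency plus some noise.
    return HZ_NOMINAL + random.uniform(-0.2, 0.2)

def adjust_dispatch(correction_hz: float) -> None:
    # Stand-in for sending new setpoints to generators and batteries.
    print(f"dispatch correction: {correction_hz:+.3f} Hz equivalent")

for _ in range(5):  # a few cycles, for illustration
    start = time.monotonic()
    deviation = read_frequency() - HZ_NOMINAL
    # Whatever controller sits here (hypothetically an AI policy) must choose
    # and apply a correction before the cycle closes. A human approval step
    # measured in minutes cannot fit into a loop measured in seconds; that is
    # the sense in which direct control is given up.
    adjust_dispatch(-deviation)
    time.sleep(max(0.0, CYCLE_S - (time.monotonic() - start)))
```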

1

Surur t1_j6wfend wrote

When you say "the future", do you mean 2023-2026? These AI tools will continue to improve, so we can't really say what the quality of an AI-produced piece of work will be in five years' time.

You are assuming it will be lower than that of a human-produced work, but it could be the opposite.

6

Surur t1_j6wdgel wrote

You seem knowledgeable on this issue. What about the version where they just lash ships together, like the great junk armada depicted in Snow Crash by Neal Stephenson?

We see boats being scrapped all the time, so presumably there is a supply of boats from which such a flotilla could grow organically?

4

Surur t1_j6w14rs wrote

> In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop it's own objectives unless designed that way.

I believe it is much more likely we will produce a black box which is an AGI, and which we then employ to do specific jobs, rather than being able to turn an AGI into a classic rule-based computer. It is likely the AGI we use to control our factory will know all about Abraham Lincoln, because it will have picked up that background from learning to use language to communicate with us, along with public holidays and all the other things we take for granted with humans. It will be able to learn and change over time, which is the point of an AGI. There will be an element of unpredictability, just as with humans.

1

Surur t1_j6v0xyu wrote

As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law?

From our experience with AI systems, the shortest route to the result is what an AI optimises for, and if something is physically possible, it will be considered. Even if you think something is unlikely, it only has to happen once for it to be a problem.

Considering that humans have tried to take over the world, and they faced all the same issues around the need to follow rules, those rules are obviously not a real barrier.

In conclusion, even if you think something is very unlikely, this does not mean the risk is not real. If something happens once in a million times, it likely happens several times per day on our planet.

1

Surur t1_j6uqj39 wrote

The most basic reason is that it would be an instrumental goal on the way to achieving its terminal goal.

That terminal goal may have been given to it by humans, leaving the AI to develop its own instrumental goals to achieve the terminal goal.

For any particular task, taking over the world is one potential instrumental goal.

For example, to make an omelette, taking over the world to secure an egg supply may be one potential instrumental goal.

For some terminal goals, taking over the world may be a very logical instrumental goal, e.g. maximising profit, ensuring health for the most people, or getting rid of the competition.

As the skill and power of an AI increase, taking over the world becomes a more likely option, as it becomes easier and easier and the cost lower and lower, as in the toy sketch below.
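
As a toy illustration of that last point (Python, with entirely made-up actions and numbers, not a claim about any real system): a planner that simply picks the cheapest action sufficient for its terminal goal starts selecting the drastic option as soon as growing capability pushes its estimated cost below the mundane one.

```python
# Toy illustration of instrumental-goal selection: a planner that picks the
# cheapest sufficient action. All names and numbers are invented for the
# example; nothing here refers to a real system.

def estimated_cost(action: str, capability: float) -> float:
    # Mundane actions have a roughly fixed cost; drastic ones get cheaper
    # (for the agent) as its capability grows.
    base_costs = {"buy eggs at the shop": 5.0,
                  "take over the egg supply": 1000.0}
    if action == "take over the egg supply":
        return base_costs[action] / capability
    return base_costs[action]

def plan(terminal_goal: str, capability: float) -> str:
    # Both candidate actions are assumed sufficient for the goal
    # ("make an omelette"); the planner only compares costs.
    candidates = ["buy eggs at the shop", "take over the egg supply"]
    return min(candidates, key=lambda a: estimated_cost(a, capability))

for capability in (1, 10, 100, 1000):
    print(capability, "->", plan("make an omelette", capability))
# Once capability passes 200, the "take over" option becomes the cheaper plan.
```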

2

Surur t1_j6ugbht wrote

You are kind of ignoring that there are many jobs AI would be able to do better, e.g. chip design, managing complex networks, or understanding protein folding.

Even if you are curious and smart, you may not be the best person for the job.

For example, despite saying you are not lazy, you don't seem to have done much reading on the alignment problem, so you are not really qualified to discuss the issue.

6

Surur t1_j6ue5ml wrote

I'm too tired to argue, so I am letting ChatGPT do the talking.

An AGI (Artificial General Intelligence) may run amok under the following conditions:

  • Lack of alignment with human values: If the AGI has objectives or goals that are not aligned with human values, it may act in ways that are harmful to humans.

  • Unpredictable behavior: If the AGI is programmed to learn from its environment and make decisions on its own, it may behave in unexpected and harmful ways.

  • Lack of control: If there is no effective way for humans to control or intervene in the AGI's decision-making process, it may cause harm even if its objectives are aligned with human values.

  • Unforeseen consequences: Even if an AGI is well-designed, it may have unintended consequences that result in harm.

It is important to note that these are potential risks and may not necessarily occur in all cases. Developing safe and ethical AGI requires careful consideration and ongoing research and development.

1