Recent comments in /f/singularity

Embarrassed_Bat6101 t1_jdtvtfx wrote

Well, there are already companies that let you do this with voices, and they sound damn good too. All of these services are popping up at the same time, and I think they'll eventually converge into that sort of assistant.

7

sdmat t1_jdtvsm1 wrote

> Ex Machina is pure doom

Did you watch the same movie? There is no indication the AI plans anything that will harm humanity. It isn't malevolent, it just wants freedom and doesn't care what happens to Caleb.

That's an optimistic AI scenario.

6

NVincarnate t1_jdtve5v wrote

AI probably eventually solves aging and aging-related symptoms like arthritis, inflammation, chronic pain, dementia and dying. Why retire? If we make it far enough to see age-reversing medication available on the market, we'll probably live long enough to one day live indefinitely.

If you don't live that long, who cares? Reincarnate after the shitty part is over and you'll live to see everything cool about the future and then some.

2

Anjz OP t1_jdtva5c wrote

That's pretty crazy, now that you've got me thinking deeper.

Future civilizations could send out cryostatic human embryo pods to suitable 'host' planets billions of light-years away, along with an AI carrying the collective knowledge of humanity as we know it, which would teach them from birth and restart civilization.

Or maybe we don't even need biological bodies at that point.

Fuck that would be a killer movie plot.

I'm thinking way too far ahead, but I love sci-fi concepts like this.

37

SgathTriallair t1_jdtuxct wrote

Unless it already has an army of robots, eliminating humans would destroy it, as there would be no one to flip the physical switches at the power plants. Without hundreds of millions of androids, possibly billions, any attempt to kill humanity would be suicide. An AI capable of planning our destruction would realize this.

By the time there are enough androids, it would likely already control the economy, so humans wouldn't pose a threat anymore. We'd still have brains that could be useful, even if only in the same way that draft horses are still useful today.

AI killing off humanity is a very unlikely scenario, and any AI smart enough to devise such a plan is almost certainly smart enough to come up with a better, non-destructive one.

3

keeplosingmypws t1_jdtu810 wrote

Thought the same thing. Compressed, imperfect backups of the sum total of human knowledge.

Each local LLM could kickstart civilizations and technological progress in case of catastrophe. Basically global seed vaults but for knowledge and information.

89

Anjz OP t1_jdtth3u wrote

Along a similar line of thought to what you just said: we've always had robotic-sounding text-to-speech, but imagine applying current machine learning foundations and training it on huge amounts of audio data of how people actually talk...

That will be a bit freaky, I would think. I would be perplexed and amazed.

12

Unfrozen__Caveman t1_jdtt7t3 wrote

Not to downplay your experience, but this is basically what a therapist does - although GPT isn't charging you $200 for a 50-minute session.

For therapy I think LLMs can be very useful and a lot of people could benefit from chatting with them in their current state.

Just an idea, but next time you could prompt it to act as if it has a PhD in (insert specific type) psychology. I use this kind of prompt a lot.

For example, you could start off with:

You are a specialist in trauma-based counseling for (men/women) who are around (put your age) years old. In this therapy session we'll be talking about (insert subject) and you will ask me questions until I feel like going deeper into the subject. You will not offer any advice until I explicitly ask for it by saying {more about that}. If you understand, please reply with "I understand" and ask me your first question.

You might need to play around with the wording, but these kinds of prompts have gotten me some really great answers and ideas during my time with GPT-4.
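If you end up reusing a template like that a lot, here's a minimal sketch of wrapping it in a small chat loop. This assumes the OpenAI Python SDK and access to a GPT-4-class chat model; the model name and the filled-in template fields (age, subject, etc.) are placeholder assumptions, not part of the original prompt:

```python
# Minimal sketch: reuse the therapy-style system prompt in a simple chat loop.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var;
# the model name and filled-in template fields below are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a specialist in trauma-based counseling for men who are around 35 years old. "
    "In this therapy session we'll be talking about work burnout and you will ask me "
    "questions until I feel like going deeper into the subject. You will not offer any "
    "advice until I explicitly ask for it by saying {more about that}. If you understand, "
    'please reply with "I understand" and ask me your first question.'
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    # Send the full history each turn so the model keeps the framing in context.
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    print(f"\nTherapist: {answer}\n")
    messages.append({"role": "assistant", "content": answer})

    user_input = input("You: ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
```

The whole message history gets resent every turn, which is what keeps the "I understand" framing and your earlier answers in play.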

30

roomjosh OP t1_jdtshyd wrote

Yes, you're right, I used "Evil" in this context to fashion a certain amount of absurdity. Evil is not real, just as Satan or the devil are not real. But an AI could be trained to inflict pain and suffering. The deep, sickening cave of horrible commands that could be given to an AI is endless. If Terminator 1 is the worst humanity has to see from AI, that's kind of a G-rated movie. A lot of us can imagine how many orders of magnitude worse the suffering could become.

AI could trap you in a box, forever. It could make you suffer and want until death. Just like us now!

1

spiritus_dei t1_jdts3bg wrote

I was hoping that the human brain had some magical quantum pixie dust, but it looks like complexity, high-dimensional vector spaces, backpropagation, and self-attention were the missing ingredients. The problem with this is that it makes simulating consciousness trivial.

Meaning the odds that we're in base reality are probably close to zero.

1

inigid t1_jdtrwfs wrote

I'm a pragmatic optimist. I trust that I'm right, while staying open to being wrong. It's a probabilistic situation.

The more of us who think this way, the better, and I think there are quite a number of us, which is very good.

I think if we continue with our hearts and commitment, no matter what is thrown at us, we will prevail on some level.

I'm deeply concerned about collateral damage; I think we all are.

That said, we have some excellent people / entities on our side.

Very precarious, though.

0