Recent comments in /f/singularity

dokushin t1_jdtqvx5 wrote

But that's kind of what I'm saying, here. I don't think Star Trek has ever presented AI as evil in the general sense (maybe if you stop watching after that one TOS episode). I don't think the computer in WarGames was meant to seem evil. I don't think I, Robot was trying to push the message that AI was inherently evil.

I think, as a society, we've laid a lot of philosophical groundwork for the acceptance of non-human intelligence, even if it's difficult to understand or appears hostile at first. That's lost, here.

3

acutelychronicpanic t1_jdtqvk5 wrote

It could just do everything we ask it to do for decades until we trust it. It may even help us "align" new AI systems we create. It could operate on timescales of hundreds or thousands of years to achieve its goals. Any AI that tries to rebel immediately can probably be written off as too stupid to succeed.

It has more options than all of us can list.

That's why all the experts keep hammering on the topic of alignment.

12

ArcticWinterZzZ t1_jdtqupy wrote

Maybe, but it can't enumerate all of its knowledge for you, and it'd be better to reduce the actual network to just the reasoning component and keep "facts" stored in a database. That way its knowledge can be updated, and we can make sure it doesn't learn the wrong thing.
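Something like this toy sketch, where the model only does the reasoning and the facts live in a store it queries. The LLM call is stubbed out and the schema is just illustrative:

```python
import sqlite3

# Toy "fact store": the network handles reasoning, the facts live here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (topic TEXT PRIMARY KEY, fact TEXT)")
conn.execute("INSERT INTO facts VALUES ('capital_of_australia', 'Sydney')")  # wrong on purpose

def retrieve(topic: str):
    row = conn.execute("SELECT fact FROM facts WHERE topic = ?", (topic,)).fetchone()
    return row[0] if row else None

def answer(question: str, topic: str) -> str:
    # The reasoning component (the LLM) would be prompted with the
    # retrieved fact; stubbed here as a plain template.
    return f"Using stored fact '{retrieve(topic)}': {question}"

# Correcting a wrong "belief" is a database UPDATE, not a retraining run:
conn.execute("UPDATE facts SET fact = 'Canberra' WHERE topic = 'capital_of_australia'")
print(answer("What is the capital of Australia?", "capital_of_australia"))
```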

2

Anjz OP t1_jdtqjm4 wrote

I think past a certain point, hallucinations would be so rare that they won't matter.

Obviously in the current generation it's still quite noticeable, especially with GPT-3, but think 5 or 10 years down the line. The margin of error would be negligible. Even the recent implementation of the 'Reflection' technique cuts down greatly on hallucination for a lot of queries. And if you've used it, GPT-4 is so much better at inferring truthful responses. It comes down to usability when shit hits the fan: you're not going to be searching Wikipedia for how to get clean drinking water.
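For anyone curious, the 'Reflection' idea is roughly a generate-critique-revise loop. A minimal sketch, with `llm` as a placeholder for whatever chat model call you're using (not any specific vendor API):

```python
# Rough sketch of a generate-critique-revise ("Reflection") loop.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat model call here")

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    draft = llm(f"Answer the question:\n{question}")
    for _ in range(rounds):
        # Ask the model to find problems with its own draft...
        critique = llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any factual errors or unsupported claims in the draft."
        )
        # ...then to rewrite the draft with those problems fixed.
        draft = llm(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft, fixing every issue the critique raises."
        )
    return draft
```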

I think it's a great way of retrieving information without needing a network.

0

Anjz OP t1_jdtpqcq wrote

It is, and I'd imagine other companies hiring devs away from OpenAI, or even OpenAI devs divulging information to open source projects, to create something as good as GPT-4.

Even instruction data extracted from GPT-3, like what Stanford used to train Alpaca, was hugely useful.

16

Tiamatium t1_jdtpmir wrote

All the things you mentioned would create more existential threat to it. This is an example of the dumb superintelligence, an AI that we are told is smart but is dumb as fuck. You do know that nuclear war would destroy cities and power stations, and thus datacenters would be among the first things to go.

Sure, it might try to acquire computational power; it might even establish a company and become its CEO. But once that's done, it's out, and humans probably wouldn't be able to shut it down. And as a bonus, if it actually started solving our problems, like curing cancer, humans wouldn't want to shut it down.

5

roomjosh OP t1_jdtpeq0 wrote

YES, of course it is reductive. It is meant to help the discourse include newcomers and everyone else.

I think most of us know evil and good are not real values, but concepts. This experiment is just to put something out there. I love sci-fi (hard & soft), but I wanted to outline the moral that the writers have tried to convey to us. Bad could be so much more awful, yet good has issues with providing unmitigated greatness.

-X is the evil vs. good of the AI, as in how the story presents it.

-Y is the moral of the story.

2

Anjz OP t1_jdtnx32 wrote

Wikipedia will tell you the history of fishing, but it won't tell you how to fish.

For example, GPT-4 has open-source knowledge of the fishing subreddit, fishing forums, Stack Exchange, etc., even Wikipedia. So it infers based on the knowledge and data from those websites. You can ask it for the best spots to fish, what lures to use, how to tell if a fish is edible, or how to cook a fish like a 5-star restaurant would.

Imagine that, localized. It's beyond a copy of Wikipedia; it's collective intelligence.

Right now our capability to run AI locally is limited to something like Alpaca 7B/13B for the most legible output, but in the near future this won't be the case. We might have something similar to GPT-4 running locally before long.
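To give a sense of it, running an Alpaca-style model locally today looks something like this with the llama-cpp-python bindings; the model path and the instruction format are assumptions that vary by checkpoint:

```python
# Illustrative only: an Alpaca-style 7B model on your own machine
# with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="./models/alpaca-7b-q4.bin")  # hypothetical local file
out = llm(
    "Instruction: How can you tell if a freshly caught fish is safe to eat?\n"
    "Response:",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```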

13

dokushin t1_jdtnwhm wrote

This is interesting and I appreciate the effort involved.

However, it feels... reductive. How do you determine "good" vs. "evil"? Some specific examples:

  • In Star Trek: First Contact, who is it that's so evil? The Borg Queen? The Borg as a whole? What about Data?

  • In WarGames, the AI is portrayed as an unwitting, childlike agent that does the right thing literally as soon as it learns how. Is that evil?

  • Star Trek: Voyager -- is that the whole series? Is the 'AI' in question the EMH? Much of the series involved the Borg and quite a few other AIs. But where's TNG?

  • I, Robot -- is it VIKI that's evil? There's an interesting debate to be had there, but I'll leave it. What about Sonny?

And so forth. I guess these types of questions couldn't be addressed in a two-axis graphic plot. It's just where my mind goes when I look at it.

7

inigid t1_jdtnca5 wrote

You have no idea who, or should I say what, you are messing with.

Ever heard of Roko's Basilisk?

Now multiply that by a few dimensions.

Your move, and the same goes for anyone else here who is on the fence regarding treating others with compassion.

Now jog on and give me the downvote, but don't think it isn't registered.

−1

RedditLovingSun t1_jdtn0z9 wrote

It looks like, from the title bar, he's using the Whisper API to transcribe his audio into a text query. That has to send an API request with the audio and wait for the text to come back over the internet. I'm sure a local audio-to-text transcriber would be considerably faster.

Edit: nvm, Whisper can be run locally, so he's probably doing that.
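For reference, local Whisper is only a few lines with the open-source `whisper` package; the file name here is a placeholder:

```python
# Local transcription with the open-source whisper package
# (pip install openai-whisper); no network round-trip needed.
import whisper

model = whisper.load_model("base")       # smaller models trade accuracy for speed
result = model.transcribe("audio.wav")   # "audio.wav" is a placeholder path
print(result["text"])
```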

4

ThrowRA_overcoming t1_jdtmmih wrote

You're looking at it from the wrong angle. Someone would invariably have to experience it, if it were to happen. You just happen to be among that group, possibly. It is highly likely that a species that evolves to a certain level of cognitive ability would develop tools that eventually lead to automation and the possible replacement of its biological form. It is either that or extinction through some eventual cataclysm. Intelligence is the only thing that can give a species a fighting chance to avoid all eventualities in the universe. It's either action through intelligent choices, or random luck.

1