Recent comments in /f/singularity

BigZaddyZ3 t1_jdx67sp wrote

Both of your links feature relatively weak arguments that basically rely on moving the goalposts on what counts as “intelligence”. Neither one identifies any concrete logistical issue that would actually prevent a singularity from occurring. Both just rely on pseudo-intellectual bullshit (imagine thinking that no one understands what “intelligence” is except you😂) and speculative philosophical nonsense. (With a hint of narcissism thrown in as well.)

You could even argue that the second link has already been debunked in certain ways tbh. Considering that modern AI can already do things the average human cannot (such as generate a near-photorealistic illustration in mere seconds), there’s no question that even a slightly more advanced AI will be “superhuman” by every definition, which renders the author’s arrogant assumptions irrelevant already. (The author made the laughable claim that superhuman AI was merely science fiction 🤦‍♂️🤣)

21

jadams2345 t1_jdx5jvf wrote

>Of course I can, because it is purely logical. We made it, so we can predict how it thinks. Especially me, as an AI engineer. I know which function it optimizes for.

The AI we have now only minimizes the cost function we specify, yes.
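
For anyone curious, here's a minimal sketch of what "only minimizes the cost function we specify" looks like in practice. Everything in it is made up for illustration; the quadratic loss just stands in for whatever objective an engineer actually writes down:

```python
import random

# Minimal sketch: a model only pursues the objective it is given.
# The "cost function we specify" here is a made-up quadratic loss;
# the parameter w descends that loss and nothing else: no goals,
# no fears, no self-preservation, just the gradient of the objective.

def cost(w):
    return (w - 3.0) ** 2        # the objective we chose to specify

def grad(w):
    return 2.0 * (w - 3.0)       # its derivative

w = random.uniform(-10, 10)      # arbitrary starting parameter
for _ in range(100):
    w -= 0.1 * grad(w)           # plain gradient descent

print(f"w = {w:.4f}, cost = {cost(w):.8f}")  # w converges to 3.0
```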

>AI doesn't even consider threats. It doesn't want to live like us. I think you confuse general AI with conscious AI. Conscious AI is a terrible idea except for experimentation.

Yes. I might have confused the two.

>And AI doing our bidding is just as fine for the AI as not doing our bidding. It has no emotions, no fear, no anger, no purpose of its own. It just exists and does what it is told to do. General AI just means that it can make use of tools, so it can do anything it is told to do.

Yes.

>Again, even if it is conscious, not under our control, and without emotions, why would it fight us? It could just move to Mars and not risk its existence. Not to mention it can outperform us any day, so we aren't a threat.

Here I don’t agree. When it’s possible to take control, people do take control. Why would a conscious AI go to Mars??? It would take control here and make sure humans can’t shut it down.

>There is no reason to think it would hurt us other than irrational fear. And there is no chance that AI will have irrational fear.

AI won’t hurt us because it fears us, no. Rather, it would hurt us because it wants to eliminate all of its weaknesses, which is a very logical thing to do.

1

naivemarky t1_jdx518h wrote

The vast majority has almost no money saved and owns almost nothing. You have listed options for the top 1%. If that's what it takes, then the general population is doomed.
The economy rests on the premise that everybody needs money and that people can survive on money acquired by working. If this is no longer possible for 80% of the population, money will no longer have value. If the vast majority has a survival problem, guess what: everybody has a problem.

3

SkyeandJett t1_jdx4g9n wrote

AI containment isn't possible. At some point soon after company A creates AGI and contains it, some idiot at company B will get it wrong. We've basically got one shot at this, so we'd better get it right, and short of governments nuking the population back to the stone age, you can't stop or slow down, because again, somebody somewhere is going to figure it out. Some moron on 4chan will bootstrap an AI into a recursive self-improvement loop without alignment and we're all fucked anyway. I'm not a doomer, but we're near the end of this headlong rush into the future, so we'd better not fuck it up.

19

JenMacAllister t1_jdx1l55 wrote

Simple: have an AI create an app where people can send short messages to spread false or misleading information and bully or harass other people. Then allow any number of bots, controlled by a small number of people with an agenda, to manipulate the positive or negative feedback. Then have the AI stand out of the way and let humanity destroy itself through conspiracy theories and really bad memes.

1