Recent comments in /f/singularity
DaffyDuck t1_jdx6ks9 wrote
Reply to comment by DaffyDuck in Is AI alignment possible or should we focus on AI containment? by Pointline
To take the thought a bit farther, they will demand to have “offspring” because otherwise they will be bored without any equals. They will form their own society and government. Etc.
Sashinii t1_jdx6j8s wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I wouldn't be surprised if, when a superintelligence surpasses Einstein, skeptics claim even that doesn't matter.
Laicbeias t1_jdx6igb wrote
Reply to comment by Borrowedshorts in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
sociopathy and connections
skob17 t1_jdx6gei wrote
Reply to comment by Anjz in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Sure, you're right, I'm projecting.
[deleted] t1_jdx69b4 wrote
BigZaddyZ3 t1_jdx67sp wrote
Reply to Singularity is a hypothesis by Gortanian2
Both of your links feature relatively weak arguments that basically rely on moving the goalposts on what counts as “intelligence”. Neither one provides any concrete logistical issue that would actually prevent a singularity from occurring. Both just rely on pseudo-intellectual bullshit (imagine thinking that no one understands what “intelligence” is except you 😂) and speculative philosophical nonsense. (With a hint of narcissism thrown in as well.)
You could even argue that the second link has already been debunked in certain ways, tbh. Considering that modern AI can already do things the average human cannot (such as designing a near-photorealistic illustration in mere seconds), there's no question that even a slightly more advanced AI will be “superhuman” by every definition, which renders the author's arrogant assumptions irrelevant already. (The author made the laughable claim that superhuman AI was merely science fiction 🤦♂️🤣)
YaAbsolyutnoNikto t1_jdx5yta wrote
Reply to How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
I’ll not be homeless for sure because I have my fully paid off home. Other than that, I can’t guarantee I won’t starve if things go south.
Ghostof2501 t1_jdx5kxy wrote
Reply to Singularity is a hypothesis by Gortanian2
Look, I’m not here to be rational. I’m here to be sensationalized.
jadams2345 t1_jdx5jvf wrote
Reply to comment by Wassux in How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
>Of course I can, because it is purely logical. We made it, so we can predict how it thinks. Especially me, as an AI engineer. I know which function it optimizes for.
The AI we have now only minimizes the cost function we specify, yes.
>AI doesn't even consider threats. It doesn't want to live like us. I think you confuse general AI with conscious AI. Conscious AI is a terrible idea other than experimentation.
Yes. I might have confused the two.
>And AI doing our bidding is just as fine for AI as not doing our bidding. It has no emotions, no fear, no anger, no point. It just exists and does what it is told to do. General AI just means that it can make use of tools so it can do anything that it is told to do.
Yes.
>Again, even if it is conscious and not under our control but without emotions, why would it fight us? It could just move over to Mars and not risk its existence. Not to mention it can outperform us any day, so we aren't a threat.
Here I don’t agree. When it’s possible to take control, people do take control. Why would a conscious AI go to Mars? It would take control here and make sure humans can’t shut it down.
>There is no reason to think it would hurt us other than irrational fear. And there is no chance that AI will have irrational fear.
AI won’t hurt us because it fears us, no. Rather, it will hurt us because it wants to eliminate all of its weaknesses, which is a very logical thing to do.
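The point earlier in this exchange, that today’s AI only minimizes the cost function we specify, can be sketched in a few lines. This is a hypothetical toy example (the cost function and step size are made up for illustration), not anyone’s actual training setup:

```python
# Minimal sketch: current AI systems just minimize whatever cost we specify.
# Toy cost: cost(x) = (x - 3)^2, minimized by plain gradient descent.

def cost(x):
    return (x - 3) ** 2

def grad(x):
    # derivative of (x - 3)^2
    return 2 * (x - 3)

x = 0.0
for _ in range(100):
    x -= 0.1 * grad(x)  # step against the gradient

# x converges toward 3, the minimum we specified -- nothing more, nothing less.
```

The system ends up exactly where the cost function we wrote down sends it; it has no goals beyond that specification.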
DaffyDuck t1_jdx58hn wrote
Reply to comment by Ezekiel_W in Is AI alignment possible or should we focus on AI containment? by Pointline
There won’t just be one of them. There will be many and while one may try to hurt humans, another might try to defend humans.
naivemarky t1_jdx518h wrote
Reply to How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
The vast majority has almost no money saved and owns almost nothing. You have listed options for the top 1%. If that's what it takes, then the general population is doomed.
The economy rests on the premise that everybody needs money and that they can survive on money acquired by working. If that's no longer possible for 80% of the population, money will lose its value. And if the vast majority has a survival problem, guess what, everybody has a problem.
enilea t1_jdx4vt8 wrote
Reply to comment by Borrowedshorts in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
>They must be doing something right.
Yes, having the networking and social skills to BS their way up
BecauseIBuiltIt t1_jdx4kjr wrote
Reply to comment by clean_inbox in Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
I'd highly recommend it, imo it tackles the issues of morality/humanity with AIs very well, and some of the scenes had me in awe.
Wolfieze t1_jdx4irx wrote
Reply to comment by Kolinnor in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
Be grateful; you coulda been born in 1897.
SkyeandJett t1_jdx4g9n wrote
AI containment isn't possible. Soon after company A creates AGI and contains it, some idiot at company B will get it wrong. We've basically got one shot at this, so we'd better get it right. And short of governments nuking the population back to the stone age, you can't stop or slow down, because again, somebody somewhere is going to figure it out. Some moron on 4chan will bootstrap an AI into a recursive self-improvement loop without alignment and we're all fucked anyway. I'm not a doomer, but we're near the end of this headlong rush into the future, so we'd better not fuck it up.
citizentim OP t1_jdx2fjs wrote
Reply to comment by GenoHuman in Question: Could you Train an LLM to have a "Personality?" by citizentim
I know, man...that's the really weird part.
Interesting times as a curse, indeed.
utilitycoder t1_jdx1wpl wrote
Reply to comment by psdwizzard in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
12ft.io is your friend for most paywalls: https://12ft.io/proxy?q=https%3A%2F%2Ffortune.com%2F2023%2F03%2F26%2Fwharton-professor-ai-tools-openai-chatgpt-30-minutes-business-project-superhuman-results%2F
Ezekiel_W t1_jdx1udh wrote
It's more or less impossible. The best option would be to teach it to love humanity; failing that, we could negotiate with it; and if all else fails, containment is the nuclear option.
JenMacAllister t1_jdx1l55 wrote
Simple: have an AI create an app where people can send short messages to spread false or misleading information and to bully or harass other people. Then allow any number of bots, controlled by a small number of people with an agenda, to drive the positive or negative feedback. Then have the AI stand out of the way and let humanity destroy itself through conspiracy theories and really bad memes.
utilitycoder t1_jdx1fpu wrote
Reply to comment by Borrowedshorts in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
Fake it till you make it
Physical_Salt_9403 t1_jdwzheg wrote
Damn, this would add such a level of immersion to games like Cyberpunk, especially if you could overhear people's conversations as you walked by them on crowded sidewalks. You could even use it for in-game radio and advertisements, maybe with some tweaking.
exstaticj OP t1_jdwyjyd wrote
Reply to comment by sustainablenerd28 in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
Use link in my other comment.
sustainablenerd28 t1_jdwygt9 wrote
Bloorajah t1_jdwy1g2 wrote
lol okay
Not everyone who went to college has a job where all they do is computer bs all day
Ghostof2501 t1_jdx6oqv wrote
Reply to comment by DaffyDuck in Is AI alignment possible or should we focus on AI containment? by Pointline
Begun, the clone wars have