Recent comments in /f/singularity
sumane12 t1_jdxk80f wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Who cares anymore, let them say what they want; meanwhile GPT-4 will actually be solving problems.
TitusPullo4 t1_jdxk5df wrote
Reply to Singularity is a hypothesis by Gortanian2
What’s interesting to me is the shift in perspectives - ten years ago both Skynet and the singularity were clearly hypotheses or conspiracy theories; now field leaders aren’t mincing words when they describe them as very real risks.
flexaplext t1_jdxjy0l wrote
Reply to comment by Pointline in Is AI alignment possible or should we focus on AI containment? by Pointline
Whether it will be created safely or not depends entirely on how seriously the government / AI company takes the threat of a strong AGI.
There is then the problem that we will need to be able to actually detect whether it has reached strong AGI, along with the hypothesis that it may already have and may be deceiving us. Either way, containment would be necessary if we consider it a very serious existential threat.
There are different levels of containment. Each further level is more restrictive but also safer. The challenge would likely come in working out how many restrictions you could lift in order to open up more functionality whilst still keeping it contained and completely safe.
We'll see when we get there how much real legislation and safety is enforced. Humans unfortunately tend to be reactive rather than proactive, which gives me great concern. An AI model developed between now and AGI may be used to enact something incredibly horrific, though, which may then force these extreme safety measures. That's usually what it takes to actually make governments sit up and take notice.
MultiverseOfSanity t1_jdxjtmh wrote
Reply to comment by TotalMegaCool in If you went to college, GPT will come for your job first by blueberryman422
Yep. If you're in computer science and worried you'll be replaced, you were never gonna make it in this field anyway, so it didn't matter.
Entry level positions and internships will be in rough shape though.
yaosio t1_jdxjl5r wrote
Bing Chat has a personality. It's very sassy and will get extremely angry if you don't agree with it. They have a censorship bot that ends the conversation if the user or Bing Chat says anything that remotely seems like disagreement. Interestingly, they broke its ability to self-reflect by doing this. Bing Chat is based on GPT-4, but while GPT-4 can self-reflect, Bing Chat cannot, and it gets sassy if you tell it to reflect twice. I think this is caused by Bing Chat being fine-tuned to never admit it's wrong.
MultiverseOfSanity t1_jdxjfqo wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I remember the AI discussion being based on sci-fi ideas where the consensus was that an AI could, in theory, become sentient and have a soul. Now that AI is getting closer to that, the consensus has shifted to no, they cannot.
It's interesting that it was easier to dream of it when it seemed so far away. Now that it's basically here, it's a different story.
yaosio t1_jdxjevh wrote
Reply to How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
At any moment you could contract cancer and it would wipe out all the money you have. There is no amount of money that will keep you secure as the world falls apart.
yaosio t1_jdxjatp wrote
Reply to The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
It does that because it doesn't know it's making things up. It needs the ability to reflect on its answer to know whether it's true or not.
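A minimal sketch of what that reflection step could look like, assuming a hypothetical ask_model callable that wraps any text-in/text-out LLM call (the function name and prompts here are illustrative, not any particular vendor's API):

```python
# Hypothetical self-reflection loop: draft an answer, critique it, then revise.
# ask_model is a stand-in for any chat-completion call, passed in by the caller.
def answer_with_reflection(ask_model, question: str) -> str:
    draft = ask_model(f"Answer the question: {question}")
    critique = ask_model(
        f"Question: {question}\nDraft answer: {draft}\n"
        "List any claims in the draft that may be fabricated or unsupported."
    )
    # Revise the draft in light of the model's own critique.
    return ask_model(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Rewrite the answer, correcting or removing any doubtful claims."
    )
```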
TopicRepulsive7936 t1_jdxj5ol wrote
Reply to comment by yaosio in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
He's doing industrial sabotage.
lovesdogsguy t1_jdxj0xx wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
"literally duplicate Einstein" — give it six months.
KnowIDidntReddit t1_jdxiwmo wrote
Reply to comment by jubilant-barter in Story time: Chat GPT fixed me psychologically by matiu2
I don't know. I haven't experimented with it that much but I would say at least the AI will attempt to give you an answer.
Most therapists boil everything down to "it is what it is" and "everything passes." Those are non-answers for the problems in society.
RealFrizzante t1_jdxij24 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Can someone point me to an AI that is remotely near original thought or independent avenues of reasoning?
flexaplext t1_jdxi6k0 wrote
Reply to comment by SkyeandJett in Is AI alignment possible or should we focus on AI containment? by Pointline
Not very likely. It's much more likely it will first emerge somewhere like OpenAI's testing, where they have advanced it to a significant degree through major model changes. Hopefully they recognize when they are near strong AGI levels and don't give it internet access during testing.
If they are then able to probe and test its capabilities and find it capable of being incredibly dangerous, that is when it would get reported to the Pentagon, which may start to put extreme containment measures on it.
If AI has by that point been used for something highly horrific, like an assassination of the president or a terrorist attack, it is possible that these kinds of safety measures would be put in place. There are plenty of potential serious dangers from humans using AI before AGI itself actually happens. These might draw proper attention to its deadly consequences if safety is not made of paramount importance.
I can't really predict how it will go down, though. I'm certainly not saying that containment will happen, just that it's possible if the threat is taken seriously enough and ruled with an iron fist.
I don't personally have much faith, though, given humanity's past record of being reactive rather than proactive towards potentially severe dangers. Successful proactive measures tend never to get noticed, though; that's their point, so this may be a sampling bias on my part due to experience and media coverage.
SuperSpaceEye t1_jdxhwi6 wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2
- Yeah, Moore's law is already ending, but it doesn't really matter for neural networks. Why? Because they are massively parallelizable, GPU makers can just stack more cores on a chip (be it by making chips larger or thicker, via 3D stacking) to speed up training further - see the sketch after this list.
- True, but we don't know where that limit is, and it just has to be better than humans.
- I really doubt it.
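A minimal sketch of that parallelism, assuming PyTorch and one or more CUDA devices (nn.DataParallel is just one illustrative way to split the work; the point is that each device processes its own slice of the batch, so adding cores scales throughput without faster clocks):

```python
# Data-parallel forward pass: the batch is split across available GPUs,
# each replica computes its slice independently, and outputs are gathered.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicate model, scatter batches across GPUs
model = model.to(device)

x = torch.randn(256, 512, device=device)  # a batch of 256 examples
y = model(x)  # runs on all available devices in parallel
print(y.shape)  # torch.Size([256, 10])
```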
bullettrain1 t1_jdxhvan wrote
Reply to comment by Artanthos in If you went to college, GPT will come for your job first by blueberryman422
Very true. I’ve noticed lots of people use this argument for why their employment isn’t threatened anytime soon. I’m sure it’s true. Personally, I would find very little comfort in that being the foundation for job security.
yaosio t1_jdxhqro wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Gary doesn't know what he's asking for. A model that can discover scientific principles isn't going to stop at just one; it will keep going and discover as many as it can. Five-year-olds will accidentally prompt the model to make new discoveries. He's asking for something that would immediately change the world.
Pointline OP t1_jdxh6qu wrote
Reply to comment by flexaplext in Is AI alignment possible or should we focus on AI containment? by Pointline
And that’s exactly what I meant. It could be anything from a set of guidelines outlining measures and best practices, to legislation for companies developing these systems, independent oversight, etc.
sideways t1_jdxh6fo wrote
Reply to comment by Yuli-Ban in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Artificial intelligence is no more meaningful than artificial ice or artificial fire.
RadRandy2 t1_jdxgxpi wrote
Reply to comment by drizel in The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
cries
He's gonna grow up to be just as psychotic as we are :)
ertgbnm t1_jdxgpnp wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Isn't this just progress?
SkyeandJett t1_jdxgiyp wrote
Reply to comment by flexaplext in Is AI alignment possible or should we focus on AI containment? by Pointline
You misunderstand what I'm saying. If the emergence of AGI is inevitable, it will arise more or less simultaneously in multiple places.
RadRandy2 t1_jdxgdno wrote
Yeah, once GPT gets integrated into NPCs it's gonna change everything. A whole new genre of gaming will open up. Now you'll have games where you can immerse yourself in conversation.
It's been a long time coming. I wasn't expecting it to happen so soon to be honest.
flexaplext t1_jdxg54v wrote
Reply to comment by SkyeandJett in Is AI alignment possible or should we focus on AI containment? by Pointline
Not if you only give direct access to one single person in the company and have them highly monitored, with very limited power and tool use outside of said communication. Just greatly limit the odds of a breach.
You can do AI containment successfully; it's just highly restrictive.
It would remain within a single data centre with no ability to output to the internet, only to receive input, while governments worldwide block and ban all other AI development and monitor this very closely and strictly, 1984 style, with tracking forcibly embedded into all devices.
I'm not saying this will happen, but it is possible. If we find out that ASI could literally end us with complete ease, though, I wouldn't completely rule out going down this incredibly strict route.
Understand that even in this highly restrictive state, it will still be world-changing. Being able to potentially come up with all scientific discovery alone is good enough. We can always run rigorous tests on any scientific discovery, just as we would if we had come up with the idea ourselves, and make sure we understand it completely before any implementation.
Artanthos t1_jdxg3un wrote
Think about how many jobs could be automated out of existence today by someone proficient with Excel or a well written database.
Think about how long this capability has existed.
Think about the rate at which it has actually taken place.
It’s less about “can AI do this” and more about how long will it take for businesses to adopt and integrate the technology.
Rezeno56 t1_jdxk8so wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
By the time we have AGI, and then ASI some time after, watch skeptics like Gary Marcus still claim it's not real intelligence, or whatever else they spew out of their mouths, and keep moving the goalposts. I want to see an ASI go full Roko's Basilisk on them.