Recent comments in /f/singularity
a3cite t1_jdz3l9s wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
It can't even do multiplication right (GPT-4).
often_says_nice t1_jdz3bqd wrote
Reply to comment by Bakagami- in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
I’m curious to hear why you feel this way. You’re saying the near future will bring extreme hardship? That’s a... refreshingly pessimistic stance around here.
GuyWithLag t1_jdz349i wrote
Reply to comment by The_Woman_of_Gont in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
>non-AGI AI is enough to convince reasonable laypeople it’s conscious to an extent I don’t believe anyone had really thought possible
Have you read about ELIZA, one of the first chatbots? It was created, what, 57 years ago?
Smellz_Of_Elderberry t1_jdz2zvg wrote
Reply to comment by D_Ethan_Bones in The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
Lol
Smellz_Of_Elderberry t1_jdz2z12 wrote
Reply to The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
My God, whatever would we do if suddenly AI started lying on the internet! No one ever lies on the internet!
We are screwed!
Bakagami- t1_jdz2hjg wrote
Reply to comment by bcuziambatman in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
Yeah, have fun telling yourself you're one of the lucky ones, before you end up in the history books of future generations as a case of extreme hardship and ignorance. I'm sure those in paradise would agree with you.
[deleted] t1_jdz1q5i wrote
Reply to comment by Iffykindofguy in How are you viewing the prospect of retirement in the age of AI? by Veleric
[deleted]
audioen t1_jdz1ol1 wrote
Reply to comment by The_Woman_of_Gont in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
An LLM, wired like this, is not conscious, I would say. It has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go straight from input to output; it can't reserve space to think or to refine its answer depending on the complexity of the task. Much of its massive size goes into recalling vast quantities of training text verbatim, though this same ability helps it do the one-shot input-to-output translation that already seems to convince so many. Yet, in some sense, it is ultimately just looking things up in something like a generalized, internalized library that holds most of human knowledge.
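To put the "same input, same output" point in code terms: a frozen model is a pure function from context to a next-token distribution. This toy sketch (an invented bigram table, nothing like a real LLM) shows why there is no room for memory or growth without extra machinery:

```python
# Toy illustration only: a frozen model maps context -> next-token
# probabilities. The same input always gives the same output, and nothing
# persists between calls, so there is no memory and no learning.
BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
}

def next_token_distribution(context: list[str]) -> dict[str, float]:
    # Output depends only on the frozen table and the input tokens.
    return BIGRAM_PROBS.get(context[-1], {})
```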
I think the next step in LLM technology is to address these shortcomings, and people are already trying to do so with various methods. Add tools like calculators and web search so the AI can look up information rather than try to memorize it. Give the AI a prompt structure where it first decomposes the task into subtasks and then completes the main task based on the results of those subtasks. Add self-reflection, where it reads its own answer, judges whether the answer turned out well, checks whether it made a reasoning mistake or hallucinated part of the response, and then goes back and edits those parts to be correct; a rough sketch of that loop is below.
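Here is a minimal Python sketch of the decompose-then-reflect loop, assuming a hypothetical `call_llm(prompt)` helper that returns the model's text response; the prompts are illustrative, not a tested recipe:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError

def answer_with_reflection(task: str, max_revisions: int = 2) -> str:
    # Decompose the task first, then attempt it using the plan.
    plan = call_llm(f"Break this task into numbered subtasks:\n{task}")
    answer = call_llm(f"Task: {task}\nPlan:\n{plan}\nComplete the task step by step.")

    # Self-reflection: the model critiques its own draft, then revises it.
    for _ in range(max_revisions):
        critique = call_llm(
            f"Task: {task}\nDraft answer:\n{answer}\n"
            "List any reasoning errors or hallucinated claims, or reply OK."
        )
        if critique.strip() == "OK":
            break  # nothing left to fix
        answer = call_llm(
            f"Task: {task}\nDraft answer:\n{answer}\nIssues:\n{critique}\n"
            "Rewrite the answer, fixing only these issues."
        )
    return answer
```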
Perhaps somewhere along the line we will even add the ability to learn from experience, where the AI runs a training pass at the end of each day on its own outputs, weighted by their self-assessed and externally observed quality, or something like that. Because we will be working with LLMs for some time, I think we will create machine consciousness expressed partially or fully in language, where the input and output remain language. Perhaps later we will figure out how an AI can drop even language, and mostly use a language module to interface with humans and their library of written material.
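As a sketch of that end-of-day training pass, assuming you log each interaction with both a self-assessed and an externally observed quality score (the `Interaction` record and the 0-to-1 scores here are invented for illustration), the data-selection step might look like:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    prompt: str
    output: str
    self_score: float      # model's own assessment of its answer, 0..1
    external_score: float  # observed quality, e.g. user feedback, 0..1

def nightly_training_set(log: list[Interaction], threshold: float = 0.8) -> list[dict]:
    # Keep only outputs that both the model and its users judged good,
    # formatted as supervised fine-tuning pairs for some (assumed) trainer.
    good = [r for r in log if min(r.self_score, r.external_score) >= threshold]
    return [{"prompt": r.prompt, "completion": r.output} for r in good]
```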
theotherquantumjim t1_jdz19vm wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2
I think AGI (depending on your definition) is pretty close already. As you’ve alluded to, we may never get ASI. I’m not sure that matters, really. The singularity suggests a point where the tech is indistinguishable from magic, e.g. nanotech, FTL travel, etc. I don’t think we need that kind of event to fundamentally re-shape society, as others have said.
prion t1_jdz18zc wrote
Reply to Singularity is a hypothesis by Gortanian2
Whether we reach AGI or not, the implications of what we already have are staggering once they are fully deployed.
All customer service jobs - GONE
Dramatically less need for assistants in medicine and law.
Dramatically less need for child care, teaching, farming, yard work, housekeeping, elder care, etc.
We are going to see a dramatic decrease in the number of humans needed for gainful employment.
I would like to point out that neither business nor government has a plan to replace the jobs that are going to disappear in the next 10-20 years, nor do we have any industries able to scale up and absorb the available workers.
The impact on individual lives and on the economy, through decreased consumption, massive defaults on car, home, and personal loans, rising homelessness, and stress on the social safety net, will create a perfect storm if nothing is put in place to redistribute the economic power of the businesses that replace human labor with automated labor.
And to be honest, they need to. It will dramatically improve almost all aspects of the businesses that implement it. BUT,
Humans have to be cared for and must be considered first before the enrichment of business due to the ability to eliminate human labor. This is not negotiable. It can't be.
The outcome will be massive civil unrest if we try to do it any other way.
Massive civil unrest that will lead to civil war, and if the US goes into civil war we will be fighting an unwinnable war on three fronts. Russia will invade from the east and China from the west, while our security forces fight an internal war against their friends, their families, their neighbors, and their fellow citizens. And I'm betting that few in our military will be willing to kill people who are homeless and starving just so a minority class can get even richer.
Most people are not that heartless.
waytogokody t1_jdz0mgn wrote
Reply to comment by Art_from_the_Machine in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
Brother, this is the future. You are making the stepping stones of what's to come! Absolutely stunning work! Imagine what could be built with this idea from the ground up!
Northcliff t1_jdz0199 wrote
Reply to comment by Azuladagio in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Well, it doesn’t.
Justdudeatplay t1_jdyzy03 wrote
So reading through it, I have a suggestion for you, and it's not going to be easy to integrate. The folks engaged in the astral projection threads (yes, I said it, "astral projection") are engaged in self-discovery of consciousness that relies on specific quirks of the brain to give them access to the deep storytelling capabilities of the brain and/or sentience. Out-of-body experiences encapsulate the human brain's ability to be self-aware through all kinds of trauma and neurochemical manipulation. Those of you working with AI who may want to emulate the human experience need to study these phenomena and recognize that they are real experiences, not fantasy or imagination. I'm not saying that they are what they seem to be, but there is an internal imagery capability of a conscious mind that needs to be understood if an AI is ever going to mimic a human mind. I think it is vitally important, and I will walk any scientist through the methods to see it. But if AI is going to progress, and if you are trying to model it on human intelligence, then you need to take this seriously.
Northcliff t1_jdyzwmz wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
It can’t do basic math yet; I think this guy is jumping the gun a bit here.
Comfortable_Slip4025 t1_jdyzi9p wrote
Reply to Singularity is a hypothesis by Gortanian2
The "singularity" is an approximation. On a long enough timescale, current human advancements are already a near-singularity.
the_new_standard t1_jdyz1s3 wrote
Reply to comment by AnOnlineHandle in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
So here's the thing. I don't really care about what it's technically classified as.
For me, I categorize AI by what end result it can produce. And at the moment it can produce writing, analysis, images, and code. If any of that were coming from a human, we wouldn't need to have an argument about training data. It doesn't matter how it does what it does. What matters is the end result.
MultiverseOfSanity t1_jdyz0ch wrote
Reply to comment by acutelychronicpanic in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Even further. We'd each need to start from the ground up and reinvent the entire concept of numbers.
So yeah, if you can't take what's basically a caveman and have them independently solve general relativity with no help, then sorry, they're not conscious. They're just taking what was previously written.
DaCosmicHoop t1_jdyywd7 wrote
Reply to comment by eve_of_distraction in Singularity is a hypothesis by Gortanian2
"Everyone dies but then realizes they actually are trapped inside 'I have no mouth but I must scream' and spend eternity there."
MultiverseOfSanity t1_jdyyr0u wrote
Reply to comment by Koda_20 in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
There's no way to tell if it does or not. And things start to get really weird if we grant them that. Because if we accept that not only nonhumans, but also non-biologicals can have a subjective inner experience, then where does it end?
And we still have no idea what exactly grants the inner conscious experience. What actually allows me to feel? I don't think it's a matter of processing power. We've had machines capable of processing faster than we can think for a long time, but to question if those were conscious would be silly.
For example, if you want to be a 100% materialist, OK: happiness is dopamine and serotonin reacting in my brain. But those chemical reactions only make sense in the context that I can feel them. So what actually lets me feel them?
LebronManning t1_jdyypqw wrote
Reply to comment by MultiverseOfSanity in If you went to college, GPT will come for your job first by blueberryman422
just facts on facts
Fickle_Ad_3554 t1_jdyym9k wrote
Reply to comment by yaosio in How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
Go vegan - no more cancer
NanditoPapa t1_jdyyeko wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2
I think the 63% of Americans who call themselves Christians are absolutely treated as normal. When 77% of adult Americans say they believe in angels, that's normalizing. If someone espouses their faith, nobody bats an eye.
Anyway, at least we both agree that being excited for a possible fact-based optimistic future is a good thing.
The_Woman_of_Gont t1_jdyy87t wrote
Reply to comment by Azuladagio in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Exactly, and that’s kind of the problem. The goalposts some people set for this stuff are so high that you’re basically asking it to pull knowledge out of a vacuum, equivalent to performing the Forbidden Experiment in the hope that the subject spontaneously develops their own language for no apparent reason (and then declaring the child not sentient when it fails).
It’s pretty clear that at this moment we’re a decent way away from proper AGI that can act on its own “volition” without very direct prompting, or discover scientific processes on its own. But I also don’t think anyone has adequately defined where the line actually is: at what point is the input sufficiently negligible that novel or unexpected output becomes a sign of emergent intelligence rather than just a fluke of the programming?
Honestly, I don’t know that we can even agree on the answer to that question, especially if we’re bringing relevant papers like Bargh & Chartrand 1999 into the discussion. I suspect that as things develop, the moment people decide there’s a ghost in the machine will ultimately come down to a gut-level “I know it when I see it” reaction rather than any particular hard figure. And some people will simply never reach that point, while there are probably a handful right now who already have.
[deleted] t1_jdz3nck wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
[deleted]