Recent comments in /f/singularity

audioen t1_jdz1ol1 wrote

An LLM wired like this is not conscious, I would say. It has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go straight from input to output; it can't reserve space to think or refine its answer depending on the complexity of the task. Much of its massive size goes into recalling vast quantities of training text verbatim, though that same ability helps it do the one-shot input-to-output translation that already seems to convince so many. Yet in some sense it is ultimately just looking things up from something like a generalized, internalized library that holds most of human knowledge.

I think the next step in LLM technology is to address these shortcomings, and people are already trying to do that with various methods. Add tools like calculators and web search so the AI can look up information rather than trying to memorize it. Give the AI a prompt structure where it first decomposes the task into subtasks and then completes the main task based on the results of those subtasks. Add self-reflection, where it reads its own answer, judges whether it turned out well, detects whether it made a reasoning mistake or hallucinated part of the response, and then goes back and edits those parts to be correct (roughly like the sketch below).
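A minimal sketch of that kind of loop, with decomposition, tool use, and a self-reflection pass. The llm() and web_search() helpers are hypothetical placeholders for a real model client and search tool, not any particular API:

```python
def llm(prompt: str) -> str:
    """Stand-in for a call to a language model; replace with a real client."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Stand-in for a lookup tool the model consults instead of memorizing facts."""
    raise NotImplementedError

def answer_with_reflection(task: str) -> str:
    # 1. Decompose the task into subtasks.
    plan = llm(f"Break this task into numbered subtasks:\n{task}")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Solve each subtask, letting the model request a tool lookup.
    notes = []
    for sub in subtasks:
        step = llm(f"Subtask: {sub}\nIf you need facts, reply SEARCH: <query>.")
        if step.startswith("SEARCH:"):
            facts = web_search(step.removeprefix("SEARCH:").strip())
            step = llm(f"Subtask: {sub}\nUse these search results:\n{facts}")
        notes.append(step)

    # 3. Draft the main answer from the subtask results.
    draft = llm(f"Task: {task}\nSubtask results:\n" + "\n".join(notes))

    # 4. Self-reflection: critique the draft, then revise if problems are found.
    critique = llm(f"Check this answer for reasoning errors or hallucinations:\n{draft}")
    if "no issues" not in critique.lower():
        draft = llm(f"Revise the answer to fix these problems:\n{critique}\n\nAnswer:\n{draft}")
    return draft
```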

Perhaps we will even add the ability to learn from experience somewhere along the line, where the AI runs a training pass at the end of each day on its own outputs, weighted by their self-assessed and externally observed quality, or something along the lines of the sketch below. Because we will be working with LLMs for some time, I think we will create machine consciousness expressed partially or fully in language, where the input and output remain language. Perhaps later we will figure out how an AI can drop language internally and mostly use a language module to interface with humans and their library of written material.
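A rough sketch of that nightly learn-from-experience pass, assuming each interaction is logged with both a self-assessed and an externally observed quality score; fine_tune() is a hypothetical stand-in for whatever training job you would actually run:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    prompt: str
    response: str
    self_score: float      # the model's own assessment of the answer, 0..1
    external_score: float  # human or downstream feedback, 0..1

def fine_tune(examples: list[dict]) -> None:
    """Stand-in for an actual training pass over the selected examples."""
    raise NotImplementedError

def nightly_update(log: list[Interaction], threshold: float = 0.8) -> None:
    # Keep only interactions that both the model and the outside world rated well,
    # and feed them back as supervised training examples.
    keep = [
        {"prompt": it.prompt, "completion": it.response}
        for it in log
        if min(it.self_score, it.external_score) >= threshold
    ]
    if keep:
        fine_tune(keep)
```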

2

theotherquantumjim t1_jdz19vm wrote

I think AGI (depending on your definition) is pretty close already. As you’ve alluded to, we may never get ASI. I’m not sure that really matters. The singularity suggests a point where the tech is indistinguishable from magic, e.g. nanotech, FTL travel, etc. I don’t think we need that kind of event to fundamentally reshape society, as others have said.

2

prion t1_jdz18zc wrote

Whether we reach AGI or not, the implications of what we already have are staggering once they are fully deployed.

All customer service jobs - GONE

Dramatically less need for assistants in medicine and law.

Dramatically less need for child care, teaching, farming, yard work, housekeeping, elder care, etc.

We are going to see a dramatic decrease in the number of humans needed for gainful employment.

I would like to point out that neither business nor government has a plan to replace the jobs that are going to disappear in the next 10-20 years, nor do we have any industries able to scale up and absorb the available workers.

The impact on individual lives and on the economy, from decreased consumption, massive defaults on car, home, and personal loans, rising homelessness, and the strain on the social safety net, will create a perfect storm if nothing is put in place to redistribute the economic power of the businesses that replace human labor with automated labor.

And to be honest, they need to. It will dramatically improve almost all aspects of the businesses that implement it. BUT,

Humans have to be cared for, and must be considered before the enrichment of businesses from the ability to eliminate human labor. This is not negotiable. It can't be.

The outcome will be massive civil unrest if we try to do it any other way.

Massive civil unrest that will lead to civil war, and if the US goes into civil war we will be fighting an unwinnable war on three fronts. Russia will invade from the east, and China from the west. Meanwhile our security forces will be fighting an internal war against their friends, their families, their neighbors, and their fellow citizens. And I'm betting that few in our military will be willing to kill people who are homeless and starving just so a minority class can get even richer.

Most people are not that heartless.

4

Justdudeatplay t1_jdyzy03 wrote

So reading through it, I have a suggestion for you, and it's not going to be easy to integrate. The folks engaged in the astral projection threads (yes, I said it, "astral projection") are engaged in self-discovery of consciousness that relies on specific quirks of the brain to give them access to the deep storytelling capabilities of the brain and/or sentience. Out-of-body experiences encapsulate the human brain's ability to be self-aware through all kinds of trauma and neurochemical manipulation. Those of you working with AI who may want to try to emulate the human experience need to study these phenomena and recognize that they are real experiences, not fantasy or imagination. I'm not saying that they are what they seem to be, but there is an internal imagery capability of a conscious mind that needs to be understood if an AI is ever going to mimic a human mind. I think it is vitally important, and I will walk any scientist through the methods to see it. But if AI is going to progress, and if you are trying to model it based on human intelligence, then you need to take this seriously.

3

the_new_standard t1_jdyz1s3 wrote

So here's the thing. I don't really care about what it's technically classified as.

For me, I categorize AI by what end result it can produce. And at the moment it can produce writing, analysis, images, and code. If any of that were coming from a human, we wouldn't need to have an argument about training data. It doesn't matter how it does what it does. What matters is the end result.

0

MultiverseOfSanity t1_jdyz0ch wrote

Even further. We'd each need to start from the ground up and reinvent the entire concept of numbers.

So yeah, if you can't take what's basically a caveman and have them independently solve general relativity with no help, then sorry, they're not conscious. They're just taking what was previously written.

16

MultiverseOfSanity t1_jdyyr0u wrote

There's no way to tell if it does or not. And things start to get really weird if we grant them that. Because if we accept that not only nonhumans, but also non-biologicals can have a subjective inner experience, then where does it end?

And we still have no idea what exactly grants the inner conscious experience. What actually allows me to feel? I don't think it's a matter of processing power. We've had machines capable of processing faster than we can think for a long time, but to question if those were conscious would be silly.

For example, if you want to be a 100% materialist, ok, so happiness is the dopamine and serotonin reacting in my brain. But those chemical reactions only make sense in the context that I can feel them. So what actually lets me feel them?

1

NanditoPapa t1_jdyyeko wrote

I think the 63% of Americans who call themselves Christians are absolutely treated as normal. When 77% of adult Americans say they believe in angels, that's normalizing. If someone espouses their faith, nobody bats an eye.

Anyway, at least we both agree that being excited for a possible fact-based optimistic future is a good thing.

1

The_Woman_of_Gont t1_jdyy87t wrote

Exactly, and that’s kind of the problem. The goalposts that some people set for this stuff are so high that you’re basically asking it to pull knowledge out of a vacuum, equivalent to performing the Forbidden Experiment in the hope of the subject spontaneously developing their own language for no apparent reason (and then declaring the child not sentient when it fails).

It’s pretty clear that at this moment we’re a decent way away from proper AGI that is able to act on its own “volition” without very direct prompting, or to discover scientific processes on its own. But I also don’t think anyone has adequately defined where the line actually is: at what point is the input sufficiently negligible that novel or unexpected output counts as a sign of emergent intelligence rather than just a fluke of the programming?

Honestly, I don’t know that we can actually agree on the answer to that question, especially if we’re bringing relevant papers like Bargh & Chartrand 1999 into the discussion. I suspect that as things develop, the moment people decide there’s a ghost in the machine will ultimately boil down to a gut-level “I know it when I see it” reaction rather than any particular hard figure. And some people will simply never reach that point, while there are probably a handful right now who already have.

6