Recent comments in /f/singularity

DonOfTheDarkNight t1_je1gncl wrote

  1. On average, something like 6 AI papers are being released per minute (I'm paraphrasing; the actual stat was roughly this, so correct me if I'm wrong; see the quick check below). How is this an AI winter?

  2. GPT-4 isn't a stochastic parrot.

I'm not trying to support any particular viewpoint either.
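For what it's worth, here's a quick back-of-envelope check on point 1 (the monthly volume below is a hypothetical placeholder, not a sourced figure):

```python
# Sanity check: what does "6 papers per minute" imply per month?
# papers_per_month is a HYPOTHETICAL placeholder, not a sourced stat.
papers_per_month = 8_000                  # assumed volume
minutes_per_month = 30 * 24 * 60          # 43,200
print(f"{papers_per_month / minutes_per_month:.2f} papers/minute")  # ~0.19
print(f"{6 * minutes_per_month:,} papers/month implied by 6/min")   # 259,200
```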

3

themoonpigeon t1_je1glcp wrote

Does anyone have any thoughts on the psychological consequences of keeping pace with this rapid technological growth? Should we take a breather and let things unfold, or should we stay engaged?

I often feel compelled to stay informed because it seems like a golden opportunity. Being knowledgeable about the latest developments and seizing the first-mover advantage could open doors to financial gains.

However, I also believe we are nearing a point where capitalism as we know it may become obsolete, rendering our efforts pointless.

So to summarize, I think the question many of us are asking is: Ten years from now, will we regret not having closely followed and capitalized on these technological advancements? Or, will it ultimately be inconsequential, given the potential for a future of equal opportunity and widespread abundance?

Edit: My gut says šŸŽµā€œTurn off your mind, relax, and float downstream.ā€šŸŽ¶

4

galactic-arachnid t1_je1fvxc wrote

You’re certainly entitled to that take. I will clarify that they are talking about research, not commercialization. And I’ll grant you that I’m an internet rando who could be making up my so-called ā€œfriendsā€. FWIW, these are people I consider accomplished in the field (AI research positions at big tech, successful AI entrepreneurs, university AI researchers).

I would encourage you to read the research for yourself (perhaps you already have) rather than the marketing output of AI companies. ā€œAttention Is All You Needā€ is a good start. And if you’re looking for a strong argument in favor of an AI winter, ā€œstochastic parrotsā€ is a good line of inquiry.

I’m not trying to support any particular viewpoint, just adding other perspectives.

2

SkyeandJett t1_je1fj9d wrote

I suspect 5c is more or less where we'll end up spending most of our time: plug into the Matrix and let the AI keep us healthy out in the real world. If you could go live out any fantasy imaginable, that's going to be a STRONG siren call. The real world, even a utopian version of it, would seem boring in comparison.

4

grantcas t1_je1fcix wrote

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

1

JacksCompleteLackOf t1_je1fbnr wrote

GPT-4 is certainly an incremental step over 3, 2, and 1, and a lot of that was predictable. It's good to see that it hallucinates a lot less than it used to.

I see lots of psychology and business types talking about how we are almost at AGI, but where are the voices of the people actually working on this stuff? LeCun? Hinton? Even Carmack?

I do agree that it's getting closer to where it will replace some jobs. That part isn't hype.

8

gljames24 t1_je1d0u5 wrote

We still regularly call enemies in games AI despite the fact that most of them are just A* pathing and simple state machines. It's considered AI as long as there is an actor that behaves in a way that resembles human reasoning or decision-making to accomplish a goal. People continue to call Stockfish an AI for this reason. We use the term AGI because most AI is domain-specific. We should probably use the words dynamic and static to distinguish AIs that can adapt their algorithms to the problem in real time from those that can't.
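As a minimal sketch of what that kind of scripted enemy "AI" amounts to (the class names and thresholds here are hypothetical, not taken from any particular engine):

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

class Enemy:
    """A toy finite state machine: the whole 'reasoning' of many game enemies."""
    SIGHT_RANGE = 10.0   # hypothetical tuning values
    ATTACK_RANGE = 1.5

    def __init__(self) -> None:
        self.state = State.PATROL

    def update(self, dist_to_player: float) -> State:
        # Transitions keyed off a single observation; while chasing, a real
        # enemy would feed A*-generated waypoints to its movement code.
        if dist_to_player <= self.ATTACK_RANGE:
            self.state = State.ATTACK
        elif dist_to_player <= self.SIGHT_RANGE:
            self.state = State.CHASE
        else:
            self.state = State.PATROL
        return self.state

enemy = Enemy()
print([enemy.update(d).name for d in (20.0, 8.0, 1.0)])
# ['PATROL', 'CHASE', 'ATTACK']
```

Swapping the patrol movement for A* waypoint following is still fixed logic; nothing in the machine adapts its own algorithm, which is the dynamic/static distinction above.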

2

dasnihil t1_je1ct32 wrote

that is fundamentally what capitalism is, it breeds itself out of scarcity. but nobody checked the brakes, because once it fixes material scarcity and things are flowing, it capitalizes on the emotional scarcities of people always wanting more.

as long as people want things, there will be people capitalizing on those wants. go figure.

4

hyphnos13 t1_je1crpe wrote

I agree, but there are many, many aspects of the economy that AI can't improve rapidly. Things still have to be dug out of the ground, moved around, etc., at least until we can 3D print or micromanufacture everything at the point it's needed.

Maybe we will get an ASI that can devise tech like that, but it's unlikely we are getting Star Trek replicators any time soon. The base atoms will have to be made available in order to make anything, and that involves a great deal of inefficient gathering and transporting for the foreseeable future.

A lot of what people are referring to as increased productivity is just increased profits from automating inefficient desk jobs and the elimination of the managers standing over them.

Real productivity increases will require better designs and machines to build things; otherwise we are just talking about reduced labor costs.

I think most of the real money from AI/AGI/ASI, whatever comes about, will be in the creation of things that don't currently exist because they haven't been invented yet, not in replacing accountants and lawyers with expert systems.

1

bullettrain1 t1_je1bv95 wrote

Sorry, by the way; I mistakenly thought it was an article summary and not something a user wrote. I also didn’t realize which sub I was in. It was rude of me to use that tone and language in my first comment; your opinion is as valid as mine.

2