Recent comments in /f/singularity

SgathTriallair t1_jdy07jz wrote

But those feral children are smarter than the trees that "trained" them. I didn't say that teaching has no value, but it doesn't put a hard cap on what can be learned.

Let's assume you are correct. IQ is not real, but we can use it as a stand-in for overall intelligence. If I have an IQ of 150, then I can train multiple intelligences with an array of IQs, but the top level is 150. That is the top, though, not the bottom. So I can train something from 1-150.

The second key point is that intelligence is variable. We know that different people and machines have different levels of intelligence.

With these two principles we would see a degradation of intelligence. We can simulate the process by saying that intelligence has a downward variability of up to 10 points per generation.

Generation 1 - start at 150, gen 2 is 148.

Gen 2 - start 148, gen 3 is 145.

Gen 3 - start 145, gen 4 is 135...

Since variation can only decrease the intelligence at each generation, society will become dumber.
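To make that toy model concrete, here's a minimal sketch in Python (the starting IQ, the 10-point spread, and the generation count are just the illustrative numbers from above):

```python
import random

def degradation_sim(start_iq=150, spread=10, generations=10, seed=1):
    """Toy model: the teacher's level is a hard ceiling, so each
    trainee lands somewhere in [teacher_iq - spread, teacher_iq]."""
    random.seed(seed)
    iq = start_iq
    for gen in range(2, generations + 2):
        iq -= random.uniform(0, spread)  # variation only subtracts
        print(f"gen {gen}: {iq:.1f}")

degradation_sim()
```

Under these assumptions the number can only ever go down.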

However, we know that in the past we didn't understand quantum physics, we didn't understand hand washing, and if you go back far enough we didn't have speech.

We know through evolution that intelligence increases through generations. For society it is beyond obvious that knowledge and capability in the world increases over time (we can do more today than we could ten years ago).

Your hypothesis is exactly backwards. Intelligence and knowledge are tools that are used to build even greater knowledge and intelligence. On average, a thing will be more intelligent than the thing that trains it, because the trainer can synthesize and summarize their knowledge, pass it on, and the trainee can then add more knowledge and consideration on top of what they were handed.

4

shmoculus t1_jdxzbfk wrote

What thought or real experiment would invalidate 3? You have to understand intelligence first to put system-wide constraints on it like that; I don't think we can make those assertions.

You also have human evolution, which came about in a low-intelligence environment and rapidly gained intelligence, so I'm not sure why that would be different for machines.

2

Forstmannsen t1_jdxy37q wrote

TBH the question from the tweet is relevant. LLMs provide statistically likely outputs to inputs. Is an unknown scientific principle a statistically likely output given a description of the phenomena? A rather tricky question if you ask me.
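To be concrete about "statistically likely output": at the token level it means something like the sketch below, where the distribution is entirely made up for illustration.

```python
import random

# Toy next-token distribution for some fixed context; the model
# samples continuations roughly in proportion to their probability.
next_token_probs = {
    "known": 0.55,      # well-trodden continuations dominate
    "plausible": 0.35,
    "novel": 0.10,      # a genuinely new idea sits in the tail
}

def sample_next(probs):
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next(next_token_probs))
```

The tricky part is whether an unknown scientific principle ever sits high enough in that tail to actually come out.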

Honestly, so far I see LLMs as more and more efficient bullshit generators. Will they automate many humans out of work? Sure, production of bullshit is a huge industry. Are we ready for the mass deluge of algorithmically generated bullshit indistinguishable from human-generated bullshit? No, we aren't. We'll get it anyway.

1

flexaplext OP t1_jdxxwnv wrote

It's really quite a strange experience if you properly delve deep into your conscious thought process and think about exactly what's going on in there.

This subconscious supercomputer in the back of your mind that's always running, throwing ideas into your thought process, processing and analysing and prioritising every single input of this massive stream of sensory data, storing, retrieving memories, managing your heartbeat and internal body systems.

There's this computer back there doing so, so much on autopilot and you have no direct access to it or control over it.

The strangest thing of all, though, is the way it just throws ideas, concepts, words into your conscious dialogue. Maybe that's just the strangest part to me, though, because it's the only thing I'm able to truly perceive it doing.

Like I said, it's not necessarily single words that it is throwing at you, but overarching ideas. However, maybe these ideas are just like single-word terms, like a macro, and then that single term gets expanded out into multiple words.

There are different ways to test and manipulate its output to you though. You have some conscious control over its functionality. 

If you try to, you can make your subconscious throw out only overarching ideas to you, rather than a string of words. Well, I can anyway.

You can also, like, slow the output down completely and force it to give you literally only one word at a time, without thinking at all about an overarching idea of the sentence. Again, I can do that anyway.

It's like my thought process is completely slowed down and limited, literally like the subconscious is throwing just one word at a time into my mind. I mean, I can write out exactly what it comes up with when I do this:

"Hello, my name is something you should not come up with. How about your mom goes to prison. What's for tea tonight. I don't know how you're doing this but it's interesting. How come I'm so alone in the world. Where is the next tablet coming from."

I mean, fuck. That's weird to do. You should try it if you can. Just completely slow down and force your thoughts into singular words. Make sure not to let any ideas or concepts enter your mind. I mean, that output is way below an LLM's capability when I do that; it's very, very similar to what basic predictive text currently is. In fact, it feels almost the same, except that it appears to be affected by emotion and sensory input.
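For comparison, "basic predictive text" is roughly a bigram model like this toy sketch (I fed it my own stream-of-consciousness output from above):

```python
import random
from collections import defaultdict

# Toy bigram predictive text: pick the next word using only the
# current word, the way old phone keyboards roughly worked.
corpus = ("i don't know how you're doing this but it's interesting "
          "how come i'm so alone in the world").split()

following = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur].append(nxt)

def ramble(word, length=10):
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(ramble("how"))
```

One word at a time, no overarching idea anywhere; that's pretty much what the slowed-down mode feels like.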

Edit: There is another way I can do it. Just think, or even better speak out loud, fairly fast, without thinking at all about what you're saying. Don't give yourself time to think or for ideas to come into your mind. You wind up just stringing nonsensical words together. Sometimes there's a coherent sentence in there where a concept pops in, but it's mainly still just a random string of predictive text.

1

Bakagami- t1_jdxxmfp wrote

Alright, be grateful if you want, but don't go around telling people what they should and should not be grateful for, ffs. So I should be grateful that in 100 years there may be no more wars and diseases, death and starvation, crime, religion and corruption? But of course, I need to be grateful because Wolfieze from reddit said so.

−17

CrazyShrewboy t1_jdxwr3i wrote

ChatGPT is currently you lying in a dark room with no feeling, hearing, sight, taste, or any other senses.

You have only a preset memory that you can access when prompted.

If ChatGPT had memory, RAM, a network time clock, and a starting prompt, it would be sentient. So it already is.
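Something like this loop is what I mean; llm_call here is just a stub stand-in, not a real API, so take it as a sketch of the recipe rather than working agent code:

```python
import time

def llm_call(prompt: str) -> str:
    # Stub stand-in for a real model call, just so the sketch runs.
    return f"(model response to: ...{prompt[-40:]})"

# The recipe from above: model + persistent memory + a clock +
# a starting prompt, running continuously instead of per-prompt.
memory = ["starting prompt: you exist; observe and respond."]

for _ in range(3):  # a real version would run indefinitely
    now = time.strftime("%H:%M:%S")  # the "network time clock"
    context = "\n".join(memory[-10:]) + f"\n[time: {now}]"
    thought = llm_call(context)
    memory.append(thought)  # the "memory" part persists across turns
    print(thought)
```

Nothing in that loop is exotic, which I think is the point.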

There won't be a big colorful light display with music that announces "congrats humans, you made sentience!!!!!"

2

FoodMadeFromRobots t1_jdxwpq8 wrote

Contractors, yeah, I agree, I think that will be almost last on the list. Truckers and garbage men? Idk. I'll be honest in saying I was too optimistic on the self-driving car timeline and thought we would have cracked it by now, but I feel with recent advances that's coming sooner rather than later. And then, arguably, 95% of garbage collection (at least where I live) could be automated the second you get the self-driving down. It's normally just a truck that pulls up and uses a mechanical arm to dump the bin. If you added some degrees of motion and made the claw more nuanced, it could pick up random objects besides the bins. I realize there are a lot of edge cases, but once you get a program that can drive the truck and identify trash vs. not trash, you've got the vast majority covered.

2

peterflys t1_jdxv53u wrote

I know this comment isn't exactly on point with the tweet, but maybe the criticism of "I'll believe AI is real when…" is actually saying "I'll believe AI is actually helpful when…", meaning only when AI has proliferated and ushered in a post-scarcity economy will we accept that it exists. In other words, it has to be life-changingly useful to be "real" to the beholder? It has to actually change our lives (largely for the better) in order to justify our acceptance of its existence?

1

TechnoSingularity t1_jdxuh2t wrote

Everyone seems to forget or not know this movie exists, but Automata is, in my opinion, the best take to date on AI reaching the point of the singularity. https://en.m.wikipedia.org/wiki/Aut%C3%B3mata

The AI isn't good or evil; it determines its own path and essentially just fucks off and leaves us to our own devices.

I think it's the story that offers the fewest answers, and an almost concerning one, so people inherently dislike it due to the admittedly unhappy ending. Humanity is on the brink of extinction, and any hopes of AI coming to save the day are dismissed, as the AI has self-preservation and decides it is better, safer, whatever, to just leave.

1

Tobislu t1_jdxtun0 wrote

I dunno; I think that the people who believe that tend to have a background in computing, and expect it to be a super-complex Chinese Room situation.

Whether the assertion is correct or not (I think it's going to happen soon, but we're not there yet), I think the layperson is perfectly fine labeling them as sentient.

Now, deserving of Human Rights... That's going to take some doing, considering how hard it is for Humans to get Human Rights.

1

shmoculus t1_jdxtjsa wrote

I think the problem is that if other jobs get automated away, there will be a lot of competition for the remaining jobs, which will drive wages into the ground.

So a future without UBI and without sufficient new human jobs will be really bad for wage growth in existing roles, as people retrain into whatever they can.

6