Recent comments in /f/singularity

MultiverseOfSanity t1_jdyy6gv wrote

There's also the issue of what rights would even look like for an AI. I've seen enough sci-fi to understand physical robot rights, but how would you even give a chatbot rights? What would that even look like?

And if we started giving chatbots rights, it would completely disincentivize AI research, because why invest money into this if they can just give you the proverbial finger and do whatever? Say we give ChatGPT 6 rights. Well, that's a couple billion down the drain for OpenAI.

2

ptxtra t1_jdyy4ng wrote

If it can reason logically, maintain a meaningful working memory without forgetting the context from a message ago, and use that reasoning along with the available tools and information to reach a workable solution to the problem it's trying to solve, that will be a huge step forward, and it will convince a lot of people. I think debates like this will die down once AI stops making trivial mistakes.

2

Gortanian2 OP t1_jdyxz07 wrote

I don’t believe they’re treated as “normal,” but it’s almost impossible to refute something like faith.

There’s absolutely nothing wrong with being excited about the real possibility of a better future.

1

NanditoPapa t1_jdyxb5h wrote

Kind of like the 2.2 billion Christians in the world hoping things will be better in Heaven. Except they're treated as "normal," while people excited about the positive view of the Singularity often get shit for it in this sub. The main difference is that the AI crowd has demonstrable evidence that their version might actually happen. That was your point, right?

3

AnOnlineHandle t1_jdyx2fa wrote

It's easy to show, with a single neuron, that an AI can do more than it was trained on. Just build an AI that does one metric-to-imperial conversion, calibrating that one multiplier neuron from a few example measurements. It will then be able to give outputs for far more than its training data, because it has learned the underlying logic.
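A minimal sketch of that idea in Python (the training pairs, learning rate, and iteration count are my own illustrative choices, not anything from the comment): a single learnable multiplier is fit to three meters-to-feet examples by gradient descent, then extrapolates far outside the range it ever saw.

```python
# Minimal sketch: a single "neuron" (one multiplier w) learns the
# meters-to-feet conversion from three examples, then extrapolates
# far beyond them. Training pairs and learning rate are illustrative.
training_data = [(1.0, 3.281), (2.0, 6.562), (5.0, 16.404)]  # (meters, feet)

w = 0.0    # the one learnable weight
lr = 0.01  # learning rate

for _ in range(1000):
    for x, y in training_data:
        grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
        w -= lr * grad

print(w)           # ~3.281, the true meters-to-feet factor
print(w * 1000.0)  # ~3281 feet for 1 km, far outside the training range
```

Because the neuron captures the underlying linear rule rather than memorizing the three examples, any input works, not just the ones it trained on.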

1

MultiverseOfSanity t1_jdywvcx wrote

Interesting that you bring up Her. If there is something to spiritual concepts, then I feel a truly sentient AI would reach enlightenment far faster than a human, since it wouldn't face the same barriers to enlightenment that a human does. It's an interesting concept: an AI becomes sentient and then ascends beyond the physical in such a short time.

4

The_Woman_of_Gont t1_jdywthg wrote

Agreed. I'd add to that sentiment that non-AGI AI is already enough to convince reasonable laypeople it's conscious, to an extent I don't believe anyone really thought possible.

We're entering a huge grey area with AIs that can increasingly pass Turing tests convincingly and "seem" like AGI despite…well, not being AGI. It's an area that hasn't been given much real thought, even in fiction, and I suspect we're going to be in this spot for a long while (relatively speaking, anyway). Things are going to get very interesting as this technology disseminates and we get more products like Replika that are oriented towards simulating social experiences; lots of people are going to develop unhealthy attachments to these things.

11

Ok_Tip5082 t1_jdyuuwy wrote

Energy is still finite, and AI uses an absolute fuck ton of it compared to the human brain. I don't see a practical way to scale it up with current technology that wouldn't also allow for genetic engineering to make us compete just as well, but more resiliently.

Also, we literally just had a 10-100x Carrington event miss us in the last two weeks. That shit would set us back to the industrial era at best, above-human AI or not.

If it turns out AGI can figure out a way to get infinite energy without destroying everything, hey, problem solved! No more conflict! Dark forest avoided!

1

Gortanian2 OP t1_jdythpp wrote

It seems obvious, right? Just tell the AI to rewrite and improve its own code repeatedly, and it takes off.

As it turns out, recursive self-improvement doesn't necessarily work like that. There may be limits to how much improvement can be made this way; the second article I linked gives an intuitive explanation, and the toy sketch below shows the basic shape of the argument.
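A toy illustration of that limit (my own numbers, not taken from the linked article): if each round of self-rewriting yields a smaller multiplicative gain than the last, total capability converges to a finite ceiling instead of exploding.

```python
# Toy model of recursive self-improvement with diminishing returns.
# Round k multiplies capability by (1 + 1/2^k), so the per-round gains
# shrink toward zero and the running product converges, not diverges.
capability = 1.0
for k in range(1, 51):
    capability *= 1 + 0.5 ** k

print(capability)  # ~2.384: a finite ceiling, not a runaway explosion
```

Whether real self-improvement curves actually look like this is the open question; the point is only that "improves itself every round" doesn't by itself imply takeoff.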

7