Recent comments in /f/singularity

BigZaddyZ3 t1_jdxfpvo wrote

Okay, but even these aren’t particularly strong arguments in my opinion:

  1. The end of Moore’s law has been mentioned many times, but it doesn’t necessarily guarantee the end of technological progression. (We are making strong advancements in quantum computing, for example.) Novel ways to increase power and efficiency within the architecture itself would likely make chip size irrelevant at some point in the future. Fewer, better chips > more, smaller chips, basically…

  2. It doesn’t have to be perfect to surpass all of humanity’s collective intelligence. That’s how far from perfect we are as a species. This is largely a non-argument in my opinion.

  3. This is just flat-out incorrect, and not based on anything concrete. It’s just speculative “philosophy” that doesn’t stand up to any real-world scrutiny. It’s like asserting that a parent could never create a child more talented or capable than themselves. It’s just blatantly untrue.

12

CypherLH t1_jdxe9w2 wrote

This is so true. I'm in a discussion group that is generally very skeptical of AI. A typical example of their goal post shifting is going from "haha, GPT3 can barely rhyme and can't do proper poetry" in 2021 to "well GPT-4 can't write a GREAT masterful poem though" now. Apply this across every domain...the ability of AI skeptics to move the goal posts is unbounded.

13

Gortanian2 OP t1_jdxdpev wrote

Thank you for your response. The logistical issues I see in these articles that get in the way of unbounded recursive self-improvement, which is thought by many to be the main driver of a singularity event, are as follows:

  1. The end of Moore’s law. This is something that the CEO of Nvidia himself has stated.
  2. The theoretical limits of algorithm optimization. There is such a thing as a perfect algorithm, and optimization beyond that is impossible.
  3. The philosophical argument that an intelligent entity cannot become smarter than its own environment or “creator.” A single person did not invent ChatGPT; it is instead the culmination of the sum total of civilization today. In other words, civilization creates AI, which is a dumber version of itself.
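Point 2 has a concrete footing in complexity theory: for some problems there are provable lower bounds, and once an algorithm matches them, further asymptotic improvement is impossible. The classic example is comparison-based sorting, which must distinguish among n! possible orderings and therefore needs at least log2(n!) comparisons, a bound merge sort already meets up to a constant factor. A minimal sketch of the bound (the function name here is just for illustration):

```python
import math

def comparison_lower_bound(n: int) -> int:
    # Any comparison sort must distinguish among n! orderings,
    # so it needs at least ceil(log2(n!)) yes/no comparisons.
    return math.ceil(math.log2(math.factorial(n)))

for n in (8, 64, 1024):
    lb = comparison_lower_bound(n)
    nlogn = math.ceil(n * math.log2(n))
    print(f"n={n}: lower bound {lb} comparisons, n*log2(n) = {nlogn}")
```

The two columns stay within a constant factor of each other, which is the sense in which an O(n log n) comparison sort is already “perfect” and cannot be recursively improved forever.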

I do not believe these arguments are irrefutable. In fact, I would like them to be refuted. But I don’t believe you have given the opposition a fair representation.

3

flexaplext OP t1_jdxc3bh wrote

I actually can't do those things. As part of aphantasia I can't generate virtual vision, virtual taste, virtual smell or virtual touch at all.

I can only generate virtual sound in my head.

This is why I can say those other mental modes are not necessary at all for thinking and consciousness. Because I know that I'm conscious and thinking without them, and I still would be without any input from my real senses. But obviously my sensory input has been completely vital to learning.

4

phriot t1_jdxbp71 wrote

I don't think anything is a given during this period. That said, I think the possibility of society becoming even more stratified over this time is very real. Between "living off wealth" and "living off UBI," I know what side I'd prefer to be on. I don't plan on having much more actual cash/cash deposits than I would otherwise, but I absolutely want to own as much of other assets as I can before my own job is disrupted.

2

D_Ethan_Bones t1_jdx930y wrote

Large swaths of us will declare humans non-sentient before admitting a machine is sentient.

Also the term "real AI" is tv-watcher fluff. It's a red flag that someone is not paying attention and instead just throwing whatever stink they can generate in order to pretend they matter somehow. If we wanted Twitter's side of the story we would be looking at Twitter right now.

18

EnomLee t1_jdx85l8 wrote

We’re going to be stuck watching this debate for a long time to come, but as far as I’m concerned, for most people the question of whether LLMs can truly be called Artificial Intelligence misses the point.

It’s like arguing that a plane isn’t a real bird or a car isn’t a real horse, or a boat isn’t a real fish. Nobody cares as long as the plane still flies, the car still drives and the boat still sails.

LLMs are capable of completing tasks that were previously solvable only by human intellect, and their capabilities are rapidly improving. For the people who are now salivating at their potential, or dreading the possibility of being made redundant by them, these large language models are already intelligent enough to matter.

295