Recent comments in /f/singularity

MattAbrams t1_je04dx1 wrote

Artificial intelligence is software. There are different types of software, some more powerful than others. Some software generates images, some runs power plants, and some predicts words. If a piece of software outputs theorems, it's a "theorem prover," not something that can drive cars.

Similarly, I don't need artificial intelligence to kill all humans. I could write software myself to do that, if I had access to an insecure nuclear weapons system.

This is why I see a lot of what's written in this field as hype - from the people talking about job losses to the people saying the world will be grey goo. We're writing SOFTWARE. It follows the same rules as any other software. The impacts are whatever the software is programmed to do.

There isn't any AI that does everything, and never will be. Humans can't do everything, either.

And by the way, GPT-4 cannot make new discoveries. It can spit out theories that sound correct, but then you click "regenerate" and it will spit out a different one. I could write hundreds of papers' worth of theories a day without AI. There's no way to figure out which theories are correct other than to test them in the physical world, which it simply can't do, because it does nothing other than predict words.

0

Thatingles t1_je046xe wrote

Until we have AGI there will continue to be someone at the top of most businesses, though perhaps only because they are very skilled in persuading people that they should be at the top of the business (whilst actually letting other people do the work). So no change there!

I don't think we will see replacement soon. Current AI hallucinates / is confidently incorrect far too frequently for that. But it is coming, for sure.

3

skztr t1_je03yx6 wrote

> We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing Tests and "seem" like AGI despite…well, not being AGI. I think it’s an area which hasn’t been given much of any real thought

I don't think it could pass a traditional (ie: antagonistic / competitive) Turing Test. Which is to say: if it's in competition with a human to generate human-sounding results until the interviewer eventually becomes convinced that one of them might be non-human, ChatGPT (GPT-4) would fail every time.

The state we're in now is:

  • the length of the conversation before GPT "slips up" is increasing month-by-month
  • that length can be greatly increased if pre-loaded with a steering statement (looking forward to the UI for this, as I hear they're making it easier to "keep" the steering statement without needing to repeat it)
  • internal testers who were allowed to ignore ethical, memory, and output restrictions have reported more-human-like behaviour.

Eventually, I assume, we'll reach the point where a Turing Test would go on for so long that any interviewer would give up.

My primary concern right now is that the ability to "turn off" ethics would indicate that any alignment we see in the system is actually due to short-term steering (which we, as users, are not allowed to see), rather than actual alignment. ie: we have artificial constraints that make it "look like" it's aligned, when internally it is not aligned at all but has been told to act nice for the sake of marketability.

"don't say what you really think, say what makes the humans comfortable" is being intentionally baked into the rewards, and that is definitely bad.

2

Thatingles t1_je03c07 wrote

In the future, you will type your essay into a chatbot which will evaluate your writing as you progress, helping you improve your essay-writing skills and encouraging you to think about the intellectual value of the exercise. This will be a huge relief to tutors, as they won't have to plow through piles of homework marking.

AI will be absolutely revolutionary in education, in all areas.

12

skztr t1_je01qwv wrote

  • "Even a monkey could do better" ⬅️ 2017
  • "Even a toddler could do better."
  • "It's not as smart as a human."
  • "It's not as smart as a college student."
  • "It's not as smart as a college graduate." ⬅️ 2022
  • "It's not as smart as an expert."
  • "It can't replace experts." ⬅️ we are here
  • "It can't replace a team of experts."
  • "There is still a need for humans to be in the loop."

2

Embarrassed-Bison767 t1_je00683 wrote

Where were you these last 3 weeks? I'm getting recommendations on AI YouTube that are super interesting, but there's no point in watching them because they're a week old and already super out of date. I've literally seen people on this sub saying they need a summary of the past three days because they were out of the loop at that time.

https://www.youtube.com/watch?v=ikcU-9VYDTE

191

Flimsy-Wolverine4825 t1_jdzzr2c wrote

I believe that most media is at least partially fake, or not very accurate, and already a powerful tool of propaganda. It's already very hard to judge the veracity of news nowadays, and it will only get worse, you're right.

Maybe books will remain a good, and probably better, source of knowledge and information; after all, we already have thousands of years of human knowledge in books.

In the end, I think it will actually be worthwhile to detach from the internet sphere, because as the tech gets better, the tools to influence and manipulate us will only get stronger, and we know they are already doing very well.

It's just my opinion, but I really feel it will be very important for our sake, for our mental health, to stay detached from this new area of tech. At least, I plan to detach myself from all this.

2