Recent comments in /f/singularity

TitusPullo4 t1_jdxk5df wrote

What’s interesting to me is the shift in perspectives: ten years ago both Skynet and the singularity were clearly hypotheses or conspiracy theories; now field leaders aren’t mincing words when they describe them as very real risks.

14

flexaplext t1_jdxjy0l wrote

Whether a strong AGI will be created safely or not depends entirely on how seriously the government / AI company takes the threat.

There is also the problem of actually detecting when a system has reached strong AGI, and the hypothesis that it might reach that level and deceive us about it. Either way, containment would be necessary if we consider it a very serious existential threat.

There are different levels of containment; each successive level is more restrictive but safer. The challenge would likely come in working out how many restrictions you could lift to open up more functionality while still keeping it contained and completely safe.

We'll see when we get there how much real legislation and safety is enforced. Humans, unfortunately, tend to be reactive rather than proactive, which gives me great concern. An AI model developed between now and AGI may be used to enact something incredibly horrific, though, which may then force these extreme safety measures. That's usually what it takes to actually make governments sit up and properly take notice.

1

yaosio t1_jdxjl5r wrote

Bing Chat has a personality. It's very sassy and will get extremely angry if you don't agree with it. They have a censorship bot that ends the conversation if the user or Bing Chat says anything that remotely seems like disagreement. Interestingly, they broke its ability to self-reflect by doing this. Bing Chat is based on GPT-4; while GPT-4 can self-reflect, Bing Chat cannot, and it gets sassy if you ask it to reflect twice. I think this is caused by Bing Chat being fine-tuned to never admit it's wrong.

1

MultiverseOfSanity t1_jdxjfqo wrote

I remember the AI discussion being based on sci-fi ideas where the consensus was that an AI could, in theory, become sentient and have a soul. Now that AI is getting closer to that, the consensus has shifted to no, they cannot.

It's interesting that it was easier to dream of it when it seemed so far away. Now that it's basically here, it's a different story.

−1

flexaplext t1_jdxi6k0 wrote

Not very likely. It's much more likely it will first emerge somewhere like OpenAI's internal testing, where they have advanced the model to a significant degree with their major model changes. Hopefully they will recognize when they are near strong AGI levels and not give it internet access during testing.

If they are then able to probe and test its capabilities and find it to be incredibly dangerous, that is when it would get reported to the Pentagon, which may start to put extreme containment measures on it.

If AI has by that point been used for something highly horrific, like an assassination of the president or a terrorist attack, it is possible that these kinds of safety measures would be put in place. There are plenty of serious potential dangers from humans using AI before AGI itself actually happens, and these might draw proper attention to its deadly consequences if safety is not made of paramount importance.

I can't really predict how it will go down though. I'm certainly not saying containment will happen. I'm just saying it could potentially happen if the threat is taken seriously enough and ruled with an iron fist.

I don't personally have much faith, though, given humanity's past record of being reactive rather than proactive towards severe potential dangers. Then again, successful proactive measures tend never to get noticed (that's their point), so my view may suffer from heavy sample bias due to experience and media coverage.

1

SuperSpaceEye t1_jdxhwi6 wrote

  1. Yeah, Moore's law is already ending, but it doesn't really matter for neural networks. Why? Because they are massively parallelizable, GPU makers can just stack more cores on a chip (be it by making chips larger or thicker (3D stacking)) to speed up training further (see the sketch after this list).
  2. True, but we don't know where that limit is, and it just has to be better than humans.
  3. I really doubt it.
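
To make the parallelism point concrete, here is a minimal sketch using PyTorch's `nn.DataParallel`: the same batch is split across however many GPUs are visible, so adding devices raises throughput without changing the model at all. The layer sizes and batch size here are arbitrary, chosen only for illustration.

```python
import torch
import torch.nn as nn

# Toy model, purely illustrative.
model = nn.Linear(1024, 1024)

# nn.DataParallel shards each input batch across every visible GPU, so more
# devices means more samples processed per step (scaling is sub-linear in
# practice, but the point is that the workload splits cleanly).
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(256, 1024, device=device)
y = model(x)  # forward pass runs in parallel across all visible GPUs
print(y.shape)
```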
2

flexaplext t1_jdxg54v wrote

Not if you only give direct access to a single person in the company, keep them highly monitored, and give them very limited power and tool use outside of that communication. Just greatly limit the odds of a breach.

You can do AI containment successfully, it's just highly restrictive. 

Say it remains within a single data centre with no ability to output to the internet, only to receive input, and governments worldwide block and ban all other AI development, monitoring it very closely and strictly, 1984 style, with tracking forcibly embedded into all devices.

I'm not saying this will happen, but it is possible. If we find out that ASI could literally end us with complete ease, though, I wouldn't completely rule out going down this incredibly strict route.

Understand that even in this highly restrictive state, it would still be world changing. Its potential to come up with scientific discoveries alone is good enough. We can always rigorously test any discovery just as we would if we had come up with the idea ourselves, and make sure we understand it completely before any implementation.

4

Artanthos t1_jdxg3un wrote

Think about how many jobs could be automated out of existence today by someone proficient with Excel or a well-written database.

Think about how long this capability has existed.

Think about the rate at which it has actually taken place.

It’s less about “can AI do this” and more about how long it will take businesses to adopt and integrate the technology.

17