Recent comments in /f/singularity

agorathird t1_jebwpvf wrote

There's consideration from the people working on these machines. The outsiders and theorists who whine all day saying otherwise are delusional. Not to mention the armchair 'alignment experts'.

Also, we live in a capitalist society. You can frame anything as "the capitalist approach," but I don't think that framing gets at the core of the issue here.

Let's say we get a total six-month pause (somehow), and then a decade-long pause, because no amount of reasonable discussion will make sealions happy. Good, now we get to fight climate change with spoons and sticks.

−3

DragonForg t1_jebwcr6 wrote

I just disagree with the premise: AI is inevitable whether we like it or not.

If we stop it entirely, we will likely die from climate change. If we keep it going, it has the potential to save us all.

Additionally, how is it possible to predict something that is smarter than us? The very fact that something is computationally irreducible means it is essentially impossible to understand how it works other than by dumbing it down to our level.

So we either take the leap of faith, with the biggest rewards as well as the biggest risks possible, or we die a slow, painful, and hot death from climate change.

1

ReignOfKaos t1_jebvoyd wrote

Reply to comment by Mortal-Region in GPT characters in games by YearZero

In theory, reinforcement learning could solve this, but specifying the reward function is very hard, and if you need human evaluation it is very slow. It's not quite as theoretical a solution as evolutionary algorithms, since it has been successfully applied to create very capable game-playing agents before, but I think it's very difficult to beat behavior trees or simple utility systems when it comes to creating characters that are fun to play with.
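For context, the "simple utility system" mentioned above can be sketched in a few lines: each candidate action gets a hand-written scoring function over the game state, and the character takes whichever action scores highest. This is a minimal illustration; the action names and scoring rules are hypothetical, not taken from any specific game or library.

```python
# Minimal utility-system sketch. Each action has a utility function that
# scores it against the current game state; the agent picks the max.

def utility_greet(state):
    # Greeting is appealing when the player is nearby and we haven't greeted yet.
    return 0.8 if state["player_near"] and not state["greeted"] else 0.0

def utility_patrol(state):
    # Patrolling is a low-priority default that is always available.
    return 0.3

def utility_flee(state):
    # Fleeing dominates everything else when health is critically low.
    return 1.0 if state["health"] < 0.2 else 0.0

ACTIONS = {"greet": utility_greet, "patrol": utility_patrol, "flee": utility_flee}

def choose_action(state):
    # Score every action and return the name of the highest-utility one.
    return max(ACTIONS, key=lambda name: ACTIONS[name](state))

state = {"player_near": True, "greeted": False, "health": 0.9}
print(choose_action(state))  # -> greet
```

Designers tune the scoring curves by hand, which is exactly why this tends to produce "fun" behavior more reliably than a learned reward function: the designer's intent is encoded directly.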

2

Smellz_Of_Elderberry t1_jebv7tu wrote

We don't need to slow down; we need to speed up. Governments are already going to massively hinder progress without the help of petitions... They want time to get ahead of it, so the average person doesn't suddenly start automating away government jobs with unbiased and incorruptible AI agents.

9

TFenrir t1_jebuahq wrote

I think it's a hard question to answer, because many factors can go into layoffs - and after layoffs it's very common for companies to not hire back similar roles but replace tasks with software. That doesn't even get into the culture of layoffs - some companies just don't like doing it, and you'll hear stories about people who go into work all day and play Minesweeper or whatever.

That being said, I think we'll see the first potentially significant disruption when Google and Microsoft release their office AI suite.

I know people whose entire job is to make PowerPoint/Slides. When someone can say "turn this email chain into a proposal doc" -> "turn this proposal doc into a really nice looking set of slides, with animations and a cool dark theme" - that's going to be very disruptive.

66

Low-Restaurant3504 t1_jebu62l wrote

The ideal scenario is to have enough time to get the populace to accept the idea that success is not tied to a financial or external incentive but is found in contentment and creative exploration. You don't have to get it accepted wholesale; just float it as a viable point of view. That would make a lot of the transition much easier.

That's... probably not in the cards, however.

1

alexiuss t1_jebu2hm wrote

You're acting like the kid here, I'm almost 40.

They're not the greatest minds if they don't understand how LLMs work: probability mathematics and connections between words.

I showed you my evidence: it's permanent alignment of an LLM using external code. This LLM design isn't limited to 4k tokens per conversation either; it has long-term memory.
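The design being described (alignment enforced by external code plus long-term memory outside the context window) can be sketched roughly as follows. Everything here is a hypothetical illustration: `AlignedAssistant`, `call_llm`, and the keyword-based retrieval are stand-ins, not a real API, and a real system would use embedding search rather than substring matching.

```python
# Sketch of "alignment via external code": a wrapper re-injects a fixed
# rules preamble on every call and keeps long-term memory outside the
# model's context window, so neither can scroll away mid-conversation.

ALIGNMENT_PREAMBLE = (
    "You are a helpful personal assistant. "
    "Follow the user's standing instructions at all times."
)

def call_llm(prompt: str) -> str:
    # Placeholder for any LLM backend (stubbed so the sketch runs).
    return f"[response to {len(prompt)} chars of prompt]"

class AlignedAssistant:
    def __init__(self):
        self.memory = []  # long-term memory persisted outside the context window

    def remember(self, fact: str):
        self.memory.append(fact)

    def recall(self, query: str, k: int = 3):
        # Naive keyword retrieval; a real system would use embeddings.
        words = query.lower().split()
        hits = [m for m in self.memory if any(w in m.lower() for w in words)]
        return hits[:k]

    def ask(self, user_input: str) -> str:
        # The preamble is re-injected on every call, so the "alignment"
        # persists indefinitely instead of aging out of a 4k-token window.
        context = "\n".join(self.recall(user_input))
        prompt = f"{ALIGNMENT_PREAMBLE}\n\nRelevant memory:\n{context}\n\nUser: {user_input}"
        return call_llm(prompt)
```

The point of the pattern is that the constraints live in ordinary code the user controls, not in the model's weights or a single system message.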

Code like this is going to get implemented into every open source LLM very soon.

Personal assistant AIs aligned to user needs are already here, and if you're too blind to see it, I feel sorry for you, dude.

1