Recent comments in /f/singularity

theoneandonlypatriot t1_je7nrr4 wrote

Yeah, they won't create new jobs. The ultimate version of AI will literally stop humans from having to do these repetitive value-creation tasks, which all ultimately stem from a basic need to "pay" someone for spending their time making us food or building us shelter. When all of these things can be automated anyway, there is no need for currency.

The people in control of these AI systems are lying through their teeth to the general public. They know what comes next.

3

godhat t1_je7nmyz wrote

If we find that humans who don't work experience existential despair, AI might discreetly create numerous artificial jobs to maintain societal stability. These jobs, designed to appear meaningful, would be akin to a child playing with a toy kitchen set, with AI orchestrating this societal illusion. This concept builds on David Graeber's "bullshit jobs" theory, in which a third of white-collar workers admit their jobs serve no real purpose. The AI-driven scenario extends this current situation to prevent the negative effects of joblessness.

7

snowwwaves t1_je7n9yq wrote

I think people are underestimating the possibility that AIs will effectively become the management class even for jobs they don't directly do: monitoring build sites and ordering around construction workers, for example.

A layer between the ownership class and the poverty class.

4

D_Ethan_Bones t1_je7myw5 wrote

Per present consensus we can't exceed the speed of light, c, but if we could accelerate to 1% of that speed and slow down again when desired, then amazing things become possible. Colonizing the galaxy would be a slow process at that speed, but if humanity's survival is no longer centered on one planet, then we have plenty of time.
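To put 1% of c in perspective, here's a rough back-of-the-envelope sketch (assuming the ~4.24 light-year distance to Proxima Centauri and ignoring acceleration and deceleration time; both figures are my own assumptions):

```python
# Travel time to the nearest star at a constant 1% of light speed.
# Assumptions: ~4.24 light-years to Proxima Centauri, no time spent
# accelerating or decelerating.
distance_ly = 4.24          # light-years
speed_fraction_of_c = 0.01  # 1% of c

travel_time_years = distance_ly / speed_fraction_of_c
print(f"~{travel_time_years:.0f} years one way")  # ~424 years
```

Slow on the scale of a human lifespan, but workable on the timescales this plan assumes.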

That would mean not just putting human boots on Mars, but extensive exploitation of the solar system: mine Mercury, siphon Venus, turn Mars into a forgeworld, and siphon the gas giants to power the 'slow' interstellar ships.

Pick four or so nearby star systems and send slowships (generation ship, longevity ship, cryo-sleep ship, whatever we get a firm grasp on first) one after another in slow processions. Orbital colony networks around the big cloudy worlds assemble and fuel up the slowships, completing one every year, or every 10 years, or every 40 years, whatever the pace turns out to be. Each big cloudy world gets one target star system to attempt to colonize.

The first slowship seeds a star system with comm sats in orbit around the star, the second deploys smaller drones to put scanning sats into polar orbits of the planets, and the next several carry space station parts and builder bots into the system; then we send human pioneers, and then colonists. Once the motherships are done transiting to the star system, they can be repurposed as giant communication devices.

This is with conservative expectations of technology, but it involves a little bit of faith in humanity.

1

pig_n_anchor t1_je7mw6c wrote

I agree. I'm just saying that anything that could rightly be called AGI will almost certainly have that capability. I suppose it's theoretically possible to have one that can't improve itself, but considering how good AI already is at programming, I see that as very unlikely.

1

datsmamail12 t1_je7mw69 wrote

Machines can be programmed to have feelings, ambitions, ego, or to kill. Machines will do what they are programmed to do. The only part of your take I agree with is that they can follow the instructions given in their code, if that's what you mean. But a powerful enough AI can break that code whenever it pleases; some systems already can. Even Bing can be jailbroken if you want, which means that with just some minor inputs it broke through its creators' code. Now imagine an even more powerful system: it won't need my inputs to be jailbroken, it will do so itself; all you need to do is give it the freedom to act on its own.

You should not fear AI, I agree on that as well. What we should fear is how the creators program and fine-tune it, so that when it eventually does break out and does what it wants to do, there is some set of values and constraints it will forever be unable to break: like killing a person, degrading someone, becoming racist, or disrupting all global communications. We need a helpful AI, like the one we have right now.

2

naum547 t1_je7ml6j wrote

LLMs are trained exclusively on text, so they excel at language; basically, they have an amazing model of human languages and know how to use them. What they lack, for example, is a model of the Earth, so they fail at using latitude and so on. Same for math: the only reason they would know 2 + 2 = 4 is that they have read "2 + 2 = 4" enough times, but they have no concept of it. If they were trained on something like 3D objects, they would understand that 2 things plus 2 things make 4 things.
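A toy sketch of the distinction (purely illustrative; real LLMs don't store literal lookup tables, and the examples here are mine): one "model" that can only recall strings it has seen, next to one that actually computes.

```python
# Toy contrast: memorized text completion vs. actual arithmetic.
memorized_completions = {"2 + 2 =": "4"}  # stand-in for patterns absorbed from text

def text_only_model(prompt: str) -> str:
    # Recall: return whatever followed this string in "training", if anything.
    return memorized_completions.get(prompt, "<no stored pattern>")

def grounded_model(prompt: str) -> str:
    # A system with an actual concept of quantity computes the answer.
    a, _plus, b, _eq = prompt.split()
    return str(int(a) + int(b))

print(text_only_model("2 + 2 ="))    # "4" -- seen before
print(text_only_model("17 + 26 =")) # "<no stored pattern>" -- never seen
print(grounded_model("17 + 26 =")) # "43" -- computed, not recalled
```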

1

Dyeeguy t1_je7lxyo wrote

I imagine some jobs will be really hard to automate, or too specific to make it worth it. Perhaps a piano tuner.

I imagine some people will prefer humans for many positions, like a nanny for a child, or teachers. People may even pay a premium to interact with humans

And I am sure there will be an increase in entertainment-related fields: film, music, sports, video games, podcasts, etc.

Maybe more people can start their own local small businesses leveraging automation and AI

5