Recent comments in /f/singularity

Sure_Cicada_4459 t1_jdzyefd wrote

We have different timelines it seems, hence why "you will be fine in the next few decades", which I interpret as "you will be able to do a meaningful economic task in some new niche", seems far-fetched to me. My thought process is that the span of tasks this covers is gigantic, and it would collapse most meaningful cognitive work into busy work. Which includes scientists, education, IT, psychology, roboticists,...

I am not saying we have AGI tomorrow, I am saying we will have AGI faster than any cognitive worker can realistically and sustainably pivot professions or get themselves a degree. Also worth pointing out that the cognitive is the bottleneck on the mechanical. Even if we don't take into account that solving cognitive scarcity would make the optimization problem of constructing efficient, cheap and useful robots a matter of iteration and prompting, intelligently piloting even a badly designed and limited robot is much easier and yields much more useful applications than, for example, a dumb AI pilot piloting a hyper-advanced fighter jet. Which in turn feeds back into how permissive and cheap your robot designs can get, and so on... And that doesn't even take into account the change in monetary incentives, as that will attract massively more investment than there is now; breakthroughs and incentives evolve jointly, after all.

GPT-4 runs on a big server and yet it still delivers reliable service to millions, so I don't think this will be a meaningful bottleneck, at least not one that should set your expectations for the next decades as anything but "my niche has a very limited shelf life, and adaptation strains plausibility rather than willingness or ability."

1

Neo-Geo1839 OP t1_jdzxzy0 wrote

While yes, that is the ideal policy, it doesn't seem likely that schools will implement it; they will probably continue to have students write essays even with AI around. Private schools would probably lobby the government to regulate AI and prevent it from writing essays and all of that, rather than just do what you said.

1

Neo-Geo1839 OP t1_jdzxiig wrote

The thing is, the arguments you just listed completely ignore the political side of things, as AI technology can potentially sway opinions and may destroy the reputation of a politician even if he didn't do/say the thing the deepfake shows him doing (or about to do). They will become so accurate that you won't be able to tell whether it was faked or real. Elections could be decided by these deepfakes (in the near future). Like, people immediately reacted to a fake Trump arrest image on Twitter; just imagine that in the future.

If there were no political side to this, I would agree that this is not really a new problem. But the fact that there is one concerns me. This isn't just about the silly little artist mad that an AI can do something better than him. No, this can be used, and inevitably would be used, by politicians to attack each other and divide the populace even further.

−3

SlackerNinja717 t1_jdzx94a wrote

I agree. I enjoy this sub, the articles posted and the discussions, but sometimes I lament that the discourse makes it seem the singularity will happen in the next 3-5 years, such that a person in their late teens might forsake investing in education or building a career because they think a major societal overhaul is around the corner. My personal opinion: the level of automation will hit an inflection point in about 50 years or so, and then our economic system will have to be completely adapted to the new landscape.

2

Exel0n t1_jdzwq6n wrote

there must be differences in bone structure.

if diff races have clear differences in skin structure, fat deposition etc., it must be in the bones too.

the diff races have been separated for like 10,000 and some even 50,000 years, enough to have differences in bone structure/density on an overall level.

2

Orc_ t1_jdzw6xv wrote

I'm actually a 10-year veteran of that sub, and back then we were convinced 2018 onwards was the beginning of the end... lmao, we had all these papers and studies and projections.

None of them meant squat. Outside of a black swan event, which I believe is still possible, civilization ain't collapsing anytime soon.

2

Orc_ t1_jdzvrxj wrote

It cannot. It has nearly 100% credibility, but it's plagued by an American accent.

For example, I trained it with 1 minute of my voice speaking Spanish, and it does English well but it cannot speak Spanish well. The same would go for the Nord women and their accent.

It would work for many other races of TES, but not Nords, Khajiit, or Argonians, for example.

You would have to wait for a separate update that deals with accent and cadence.

0

qepdibpbfessttrud t1_jdzuceg wrote

Maybe. I wouldn't be surprised if 90%+ of Wiki users were satisfied with just an AI chat trained on it. ~21GB doesn't yet allow running the thing in RAM cheaply.

I'm not advocating for getting rid of the Wiki; amassing and preserving training data will likely be important for whole generations of AI. But I also wouldn't be surprised if some version of GPT could generate a version of the Wiki better in every possible way than what all of mankind has managed so far.

1

TotalMegaCool t1_jdztp8b wrote

I am well aware of the capabilities of AI and the likely emergence of AGI in the coming decade. I said a couple of decades because eventually we may have AGIs and robotics able to do everything a human can do, better, but it's going to take at least two decades.

Even if we developed an AGI tomorrow, it would likely run on a massive server. It will take at least 5 years to be able to deploy something that large on a mobile platform for edge computing. Add to this the fact that robotics is nowhere near as capable or flexible as a human. It's likely going to be another decade before we see humanoid robots that rival a real human, and another 5 years of building the infrastructure to mass-produce them on a scale to replace the human workforce.

Yes, if you have an office job that does not require a robotic body, you may be replaced by an AI sooner. But that's the point I was trying to make: if you don't change and adapt, you are going to be unemployed quickly. If you do adapt and change with the times, at least for the next couple of decades you are going to be fine. There is too much work that needs to be done that can't be automated by a server-based AGI.

The next big growth industry is going to be building the future Utopia and all the automated systems, so we can then worry about what we do when we are out of a job. It's going to happen over the next couple of decades; if you adapt you can be part of it, if you can't you are going to struggle. Maybe we can support a UBI before we build the automated Utopia, but I would rather be part of building it.

1

JusttryininMR t1_jdzt6qz wrote

IMHO there is a lot of unmitigated bullshit in responses to fears of AI killing entire job classes. The idea that governments, especially in the US, will change policies is ludicrous. There is little to NO chance of UBI being implemented in the next 10 years.

0

User1539 t1_jdzsxbk wrote

My point is that we don't need AGI for AI to be an incredibly disruptive force. People are sitting back thinking, "Well, this isn't the end-all be-all of AI, so I guess nothing is going to happen to society. False alarm, everybody!"

My point is that, in terms of traditional automation, pre-AGI is plenty to cause disruption.

Sure, we need AGI to reach the singularity, but things are going to get plenty weird before we get there.

1