Recent comments in /f/singularity
1BannedAgain t1_jdzy9uo wrote
Reply to Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
Half the decisions made at large organizations are wrong. There’s plenty of literature on C-suite decision failures. This means there is room for improvement.
Neo-Geo1839 OP t1_jdzxzy0 wrote
Reply to comment by Surur in How we will we be able to distinguish AI-made from Human-made? by Neo-Geo1839
While yes, that would be the ideal policy, it doesn't seem likely that schools will implement it; they will probably continue to have students write essays even once AI is commonplace. Private schools would more likely lobby the government to regulate AI and keep it from writing essays and all of that, rather than just do what you described.
Neo-Geo1839 OP t1_jdzxiig wrote
Reply to comment by Tiamatium in How we will we be able to distinguish AI-made from Human-made? by Neo-Geo1839
The thing is, the arguments you just listed completely ignore the political side of things. AI technology can sway opinions and could destroy a politician's reputation even if he never did or said the thing the deepfake shows him doing (or about to do). Deepfakes will become so accurate that you won't be able to tell whether something is faked or real. Elections could be decided by these deepfakes in the near future. People reacted immediately to a fake Trump arrest image on Twitter; just imagine that in the future.
If there were no political side to this, I would agree that it's not really a new problem. But the fact that there is one concerns me. This isn't just about a silly little artist mad that an AI can do something better than he can. No, this can, and inevitably will, be used by politicians to attack each other and divide the populace even further.
SlackerNinja717 t1_jdzx94a wrote
Reply to Singularity is a hypothesis by Gortanian2
I agree. I enjoy this sub, the articles posted and the discussions, but sometimes I lament that the discourse makes it seem the singularity will happen in the next 3-5 years, such that a person in their late teens might forsake investing in education or building a career because they think a major societal overhaul is around the corner. My personal opinion: the level of automation will hit an inflection point in about 50 years or so, and then our economic system will have to be completely adapted to the new landscape.
YaAbsolyutnoNikto t1_jdzx3n4 wrote
Reply to comment by NanditoPapa in Singularity is a hypothesis by Gortanian2
Wait, angels? Aren’t Americans Protestant? Aren’t angels and saints a Catholic thing?
Yes, I’m completely ignorant on this topic.
Exel0n t1_jdzwq6n wrote
Reply to comment by audioen in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
There must be differences in bone structure.
If different races show clear differences in skin structure, fat deposition, etc., there must be differences in the bones too.
The races have been separated for something like 10,000 years, and in some cases 50,000, which is enough time to develop differences in bone structure/density at an overall level.
Grand_Milk63 t1_jdzwoss wrote
Reply to comment by HeinrichTheWolf_17 in If you went to college, GPT will come for your job first by blueberryman422
I don’t know. Self-driving vehicles will probably be here eventually. My garbage men don’t get out of the truck unless they really need to, and they usually leave behind items that aren't in a standard can.
I live in the suburbs, so there's that.
Orc_ t1_jdzw6xv wrote
Reply to comment by banuk_sickness_eater in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
I'm actually a 10-year veteran of that sub, and back then we were convinced that 2018 onwards was the beginning of the end... lmao, we had all these papers and studies and projections.
None of them meant squat. Barring a black swan event, which I believe is still possible, civilization ain't collapsing anytime soon.
fluffy_assassins OP t1_jdzw1wd wrote
Reply to comment by Primo2000 in Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
The only way an AI won't make decisions better than a human is if somehow we stop it from making the decisions at all, and eventually we won't even be able to do that.
Orc_ t1_jdzw0j1 wrote
Reply to comment by Bakagami- in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
Be grateful and be quiet.
SkyeandJett t1_jdzvx84 wrote
Reply to comment by Surur in How we will we be able to distinguish AI-made from Human-made? by Neo-Geo1839
Even if you stopped AI advancement right now, at this exact moment in time, the traditional classroom instruction model is completely fucked. You'd be much better off using GPT as a one-on-one tutor.
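A minimal sketch of the tutor idea, assuming the OpenAI Python SDK; the model name, system prompt, and loop structure are illustrative assumptions, not anything SkyeandJett specified:

```python
# Hypothetical one-on-one tutor loop built on the OpenAI chat API.
# Model name and system prompt are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "system",
    "content": "You are a patient one-on-one tutor. Ask guiding questions, "
               "check the student's reasoning, and avoid just giving the answer.",
}]

while True:
    student = input("Student: ")
    if not student:
        break  # empty line ends the session
    messages.append({"role": "user", "content": student})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Tutor:", answer)
```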
sideways t1_jdzvvus wrote
AI-made will be faster, cheaper, and higher quality.
Roubbes t1_jdzvst8 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I actually like inferring scientific principles as a benchmark.
Orc_ t1_jdzvrxj wrote
Reply to comment by banuk_sickness_eater in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
It cannot. It sounds nearly 100% credible, but it's plagued by an American accent.
For example, I trained it on one minute of my voice speaking Spanish, and it does English well but it cannot speak Spanish well. The same thing would happen with the Nord women and their accent.
It would work for many other races in TES, but not Nords, Khajiit, or Argonians, for example.
You would have to wait for a separate update that deals with accent and cadence.
Orc_ t1_jdzv03x wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Some naysayers won't admit anything and will continue their forced cynicism right up until a robot is holding them at Phased Plasma Pulse-Gun point.
Primo2000 t1_jdzug6k wrote
Reply to comment by fluffy_assassins in Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
I suspect this is just a publicity stunt.
qepdibpbfessttrud t1_jdzuceg wrote
Reply to comment by Ambiwlans in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Maybe. I wouldn't be surprised if 90%+ of Wiki users were satisfied with just an AI chat trained on it. At ~21GB, the dump is still too big to run from RAM cheaply.
I'm not advocating for getting rid of Wiki; amassing and preserving training data will likely be important for whole generations of AI. But I also wouldn't be surprised if some version of GPT could generate a version of Wiki better in every possible way than what all of mankind has managed so far.
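A rough sketch of the local, offline setup the parent thread is describing, assuming llama-cpp-python and a quantized model file; the path, prompt format, and generation settings are placeholders, not a tested recipe:

```python
# Sketch: asking a locally stored, quantized model a question with no internet,
# as a stand-in for looking the topic up on Wikipedia.
# Model path and generation settings are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(model_path="./models/ggml-model-q4_0.bin", n_ctx=2048)

question = "What causes a lunar eclipse?"
out = llm(f"Q: {question}\nA:", max_tokens=256, stop=["Q:"])
print(out["choices"][0]["text"].strip())
```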
Hackerjurassicpark t1_jdzu4j3 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Who put this guy in charge of defining what intelligence is?
TotalMegaCool t1_jdztp8b wrote
Reply to comment by Sure_Cicada_4459 in If you went to college, GPT will come for your job first by blueberryman422
I am well aware of the capabilities of AI and the likely emergence of AGI in the coming decade. I said a couple of decades because, eventually, we may have AGIs and robots able to do everything a human can do better, but that's going to take at least two decades.
Even if we developed an AGI tomorrow, it would likely run on a massive server. It will take at least 5 years to deploy something that large on a mobile platform for edge computing. Add to that the fact that robotics is nowhere near as capable or flexible as a human. It's likely going to be another decade before we see humanoid robots that rival a real human, and another 5 years of building the infrastructure to mass-produce them on a scale that could replace the human workforce.
Yes, if you have an office job that does not require a robotic body, you may be replaced by an AI sooner. But that's the point I was trying to make: if you don't change and adapt, you are going to be unemployed quickly. If you do adapt and change with the times, at least for the next couple of decades you are going to be fine. There is too much work that needs to be done that can't be automated by a server-based AGI.
The next big growth industry is going to be building the future utopia and all the automated systems, so that we can then worry about what we do when we are out of a job. It's going to happen over the next couple of decades; if you adapt you can be part of it, and if you can't you are going to struggle. Maybe we can support a UBI before we build the automated utopia, but I would rather be part of building it.
JusttryininMR t1_jdzt6qz wrote
Reply to How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
IMHO there is a lot of unmitigated bullshit in responses to fears of AI killing entire job classes. The idea that governments, especially in the US, will change policies is ludicrous. There is little to NO chance of UBI being implemented in the next 10 years.
Durabys t1_jdzt6eg wrote
Reply to comment by acutelychronicpanic in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Already happened with DABUS AI... and they proceeded to move the goalposts.
User1539 t1_jdzsxbk wrote
Reply to comment by Shiningc in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
My point is that we don't need AGI to be an incredibly disruptive force. People are sitting back thinking 'Well, this isn't the end-all be-all of AI, so I guess nothing is going to happen to society. False alarm everybody!'
My point is that, in terms of traditional automation, pre-AGI is plenty to cause disruption.
Sure, we need AGI to reach the singularity, but things are going to get plenty weird before we get there.
Sure_Cicada_4459 t1_jdzyefd wrote
Reply to comment by TotalMegaCool in If you went to college, GPT will come for your job first by blueberryman422
We have different timelines, it seems, hence why "you will be fine for the next few decades", which I read as "you will be able to do a meaningful economic task in some new niche", seems far-fetched to me. My thought process is that the span of tasks this covers is gigantic, and automating it would collapse most meaningful cognitive work into busy work. That includes scientists, education, IT, psychology, robotics,...
I am not saying we will have AGI tomorrow; I am saying we will have AGI faster than any cognitive worker can realistically and sustainably pivot professions or earn a new degree. It is also worth pointing out that the cognitive is the bottleneck on the mechanical. Even setting aside that solving cognitive scarcity would make the optimization problem of building efficient, cheap, and useful robots a matter of iteration and prompting, intelligently piloting even a badly designed and limited robot is much easier, and yields much more useful applications, than, say, a dumb AI pilot flying a hyper-advanced fighter jet. That in turn feeds back into how permissive and cheap your robot designs can get, and so on... And that doesn't even take into account the change in monetary incentives, which will attract massively more investment than there is now; breakthroughs and incentives evolve jointly, after all.
GPT-4 runs on a big server and yet it still delivers reliable service to millions. I don't think this will be a meaningful bottleneck, at least not one that should set your expectations for the next decades to anything but "my niche has a very limited shelf life, and adaptation stretches plausibility rather than willingness or ability."