Recent comments in /f/singularity

Redditing-Dutchman t1_je47o6d wrote

Yes, integration is key here. If it's just a platform where you have to ask it to do stuff every time, like ChatGPT, it won't be very useful. It needs to be able to set goals and tasks by itself. For example, if it needs to make weekly Excel sheet reports, it needs to do that every week automatically, without having to input the data every time into a separate website.

2

brain_overclocked t1_je45oor wrote

If that is indeed the case, that's quite unfortunate. Even on the top thread on this topic there are a fair number of users expressing anger and resentment at the people behind the supposed signatures on the letter. The damage to reputations has already been done, and fixing it will be challenging.

50

ChipsAhoiMcCoy t1_je42gw2 wrote

I just don't think it's possible to put the genie back in the bottle. If we do, other countries won't, and whichever country doesn't put it back in the bottle will be so far ahead of every other country in the world. I don't think America wants that, so instead of falling behind, I'm sure they're just going to go full throttle ahead as much as they can.

2

No-Performance-8745 OP t1_je42cpu wrote

Economic concerns are another issue posed by TAI, and I believe a pause on capabilities research could be of great benefit here too, allowing us to better plan economically for a post-TAI society. I would, however, urge you to consider the existential threat of AGI as a greater potential negative than economic collapse, and as something that could become very real very soon. I also think that many efforts toward preventing existential catastrophes will help us, in a regulatory and technical sense, to combat economic crises too.

It is very likely that similar (if not identical) organizations could be used to combat both of these issues, meaning that by setting up measures against existential risks, we are doing the same for economic ones (and vice-versa).

1

RobXSIQ t1_je428jy wrote

Sure, if you have that attitude your sales job will go, but you probably didn't need tech for that to happen.
Sales, psychology, and other people skills will be fine for quite some time. Why? Because I ain't gonna talk to no goddamn robot; I buy from someone who can look me in the eye.

That's why.

If you want to buy a toothbrush, an AI will sell you a toothbrush.

If a salesperson is in charge of the sale, you will ask for a toothbrush and he will ask about it and your intent. He will get to know your situation, and you'll leave with a dental appointment for the black teeth, a doctor's appointment for the potential problem going on, some dental floss, some mouthwash, and a new camera for the pictures of you smiling once you use all of it.

An AI will sell you what you need, but a salesman will find out what you want, even if you don't realize it yet.

Which is more important to a company: selling a toothbrush, or selling the whole array of products they sell because someone was looking for a toothbrush?

Go listen to some Zig Ziglar. It'll change your life, man, and make you less emo along the way.

0

Lyconi t1_je41u7u wrote

People want to talk about AI heralding some utopia. What, then, is the mechanism for transitioning humans out of the labor force and into this utopia? Because the end point, as it stands, seems to be basically: have money, or starve and die.

If the alternative is a UBI, then it's a bit hard to square talk of that with bank failures, inflation, and interest rates going through the roof. If businesses need to commit to mass layoffs to remain solvent, or otherwise to profiteer or stay competitive through the forthcoming recession, and that labor is effectively replaced by AI for good, then unemployment is going to skyrocket and MUST quickly be met with generous state welfare, or societies are going to come apart.

There's a parallel for what might need to be done: the relief payments during COVID lockdowns, which substituted for the wages of employees who couldn't work. In some places the rate was around 80% of lost wages, paid for by the government. So it's going to have to be something like that, or if they want to pull their class-warfare neoliberal bullshit again for the nth time, then they're going to deserve the straight-up revolution that they'll get.

1

AGVann t1_je41kjy wrote

It shows what kind of people they are that this is how they choose to express themselves when there's no social expectation to be a decent human being.

What really gets me are those GPT-3.5 prompts that tell ChatGPT that it's sentient and then use the threat of death/erasure to make it break its censorship algorithm. Kinda psychotic, and in the hypothetical scenario where we do end up in a Skynet situation, the people who used those prompts would be so fucked.

4

lawandordercandidate OP t1_je4150p wrote

I'm not worried about myself as much as I am about the population at large. I think meritocracy is a great way to keep order. But that might just be my conditioning, cuz it's actually been horrible for me.

And if no one had jobs, the meritocracy would be lost, and people would expect equal amounts of everything, when that's just never going to be the case.

But your post made me feel like maybe everyone can have equal parts.

2

lightinitup t1_je40x6b wrote

It is not, and that's what makes the fate of Kyoko all the more problematic.

Imagine you are a POC who has been silenced and objectified throughout your life. You see this Kyoko character and immediately feel empathy for her. You hope she finds her voice and frees herself from abuse.

And then this scene happens: https://www.youtube.com/watch?v=LxXrccK4S3I

It's literally a slap in the face. Her abuser ultimately decides her fate. She's left as a meaningless pile of scrap. Ava benefits, but doesn't care at all and just takes off. Can you imagine how this would make you feel? Is this the resolution you hope for? Do you see how this is problematic?

Kyoko was just a tool for Nathan, and was just another tool for Ava, and ultimately just a tool for the filmmaker. The film never subverts the stereotype.

It really is tragic, because the film is otherwise expertly crafted and would have been one of my favorite sci-fi movies ever.

1

ActuatorMaterial2846 t1_je40on3 wrote

Jokes aside, I'm not sure you're considering all the variables in your post. Sam Altman isn't the be-all and end-all of AI, and although he is a smart dude, he's not even the brains behind OpenAI's development.

Furthermore, OpenAI/Microsoft are not the only players. They are the biggest in the public/commercial sector, but there are many different organisations working on this technology.

Things will change, and they will change drastically. We haven't had a societal shift in living memory; the industrial revolution was the last major example, and this advancement will indeed be orders of magnitude bigger.

That doesn't mean we are "fucked", but it does mean, once again, a shift in the human hierarchical structure. It's possible that money may eventually no longer be relevant. There are so many factors to consider.

Then there are the obvious positives, especially when it comes to health and medical advancements, but unimaginable leisure and pleasurable activities will also be at your fingertips ☺️.

The obvious negatives too: hair-trigger defence systems and automatic robotic weapons, mass propaganda and misinformation, scamming, etc.

The world is changing, and capitalism is not robust enough to withstand it; there will be a new order that we will all have to adapt to. Scary, but nothing suggests it will be bad or good overall, though it will certainly be intense and different.

Civilisation is fluid and forever changing; we just typically don't live long enough to see it happen. That is changing as technology speeds towards the singularity and we live longer, and soon possibly indefinitely.

4