Recent comments in /f/singularity
JDP87 t1_jdt6uhc wrote
Reply to comment by kevinzvilt in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
At least you're getting an answer.
Iffykindofguy t1_jdt5yfa wrote
Reply to comment by Apart_Supermarket441 in How are you viewing the prospect of retirement in the age of AI? by Veleric
This hinges on the myth that manual labor won't be impacted by this. It will be.
TFenrir t1_jdt5sir wrote
Reply to Story Compass of AI in Pop Culture by roomjosh
In what way was Transcendence about an evil AI?
Iffykindofguy t1_jdt5rtq wrote
Reply to comment by PaperbackBuddha in How are you viewing the prospect of retirement in the age of AI? by Veleric
Republicans. You can say them by name. It will be the Republicans. Vote against them.
Ortus14 t1_jdt5myg wrote
UBI is retirement. Surviving the transition is the challenge.
I expect post-scarcity (enough UBI for all of us to live well enough) to occur sometime in the next twenty to fifty years.
eggsnomellettes t1_jdt5dxl wrote
Reply to comment by itsnotlupus in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
They're using ElevenLabs, which isn't local, hence the slow API call.
liqui_date_me t1_jdt531o wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
You would think that GPT would have discovered a general-purpose way to multiply numbers, but it really hasn't, and it isn't accurate even with chain-of-thought prompting.
I just asked GPT-4 to solve this: 87176363 times 198364
The right answer should be 17,292,652,070,132 according to Wolfram Alpha.
According to GPT-4 the answer is 17,309,868,626,012.
This is the prompt I used:
What is 87176363 times 198364? Think of the problem step by step and give me an exact answer.
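For comparison, the exact product is easy to verify locally; a minimal sketch, with GPT-4's figure from above hard-coded:

```python
# Verify the multiplication above with Python's exact big-integer arithmetic.
a, b = 87176363, 198364
correct = a * b
gpt4_claimed = 17_309_868_626_012  # the (wrong) answer GPT-4 gave above

print(correct)                  # 17292652070132, matching Wolfram Alpha
print(gpt4_claimed - correct)   # GPT-4 is off by over 17 billion
```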
TheMadGraveWoman t1_jdt4xqo wrote
Reply to comment by SupportstheOP in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I do not feel like a superintelligence. Why can't I do super-fast calculations?
antipod t1_jdt4pju wrote
Reply to comment by valdocs_user in How are you viewing the prospect of retirement in the age of AI? by Veleric
Exactly. WFH vs return to work continues to be a battle for a lot of organizations.
zero_for_effort t1_jdt4nz1 wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
This is the explanation I found easiest to understand, cheers.
roomjosh OP t1_jdt4fu1 wrote
Reply to Story Compass of AI in Pop Culture by roomjosh
I wanted to try to plot the stories involving AI this morning. All suggestions welcome.
SkyeandJett t1_jdt2zli wrote
Reply to comment by moonpumper in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
That was my thought. No more phone. Just the smart watch.
Ok_Tip5082 t1_jdt2kts wrote
Reply to comment by Ashamed-Asparagus-93 in How are you viewing the prospect of retirement in the age of AI? by Veleric
Your example is orders of magnitude less likely than the Carrington Event, or the recent one that was 100x more powerful but luckily pointed away from Earth.
itsnotlupus t1_jdt2igm wrote
Reply to comment by sumane12 in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
The model's text output is (or can be) a stream, so it ought to be possible to pipe that text stream into a warmed-up TTS system and start getting audio before the text is fully generated.
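A rough sketch of the idea: buffer the token stream and flush whole sentences to TTS as soon as they complete, instead of waiting for the full response. The token source here is a stand-in, and in a real pipeline each yielded sentence would go to the TTS engine rather than `print`:

```python
import re

def token_stream():
    # Stand-in for a streaming LLM response (tokens arriving incrementally).
    for tok in "Sure. The weather today is sunny. Expect a high of 21C.".split(" "):
        yield tok + " "

def sentence_chunks(tokens):
    """Group a token stream into sentences so TTS can start speaking
    before text generation has finished."""
    buf = ""
    for tok in tokens:
        buf += tok
        # Flush whenever the buffer contains a completed sentence.
        while m := re.search(r"(.+?[.!?])\s+", buf):
            yield m.group(1)
            buf = buf[m.end():]
    if buf.strip():
        yield buf.strip()

for sentence in sentence_chunks(token_stream()):
    print(sentence)  # in a real pipeline, hand each sentence to the TTS engine
```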
[deleted] t1_jdt2i8n wrote
Reply to comment by DragonForg in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Because machines don’t speak to humans through 1s and 0s? C’mon.
[deleted] t1_jdt2eug wrote
Reply to comment by Neurogence in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
The AI community isn't going to get to AGI without the financial backing of the non-AI community. In that context it makes more sense to deploy a commercially successful LLM.
itsnotlupus t1_jdt280v wrote
Reply to comment by illathon in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
Whisper is the speech recognition component.
I don't think he said what he's using for TTS; it might be macOS's built-in thingy.
ArcticWinterZzZ t1_jdt1h3m wrote
Reply to comment by masonw32 in Why is maths so hard for LLMs? by RadioFreeAmerika
Yes, but we are interested in its general purpose multiplication abilities. If it remembers the results, that's nice, but we can't expect it to do that for every single pair of numbers. And then, what about multiplication with 3 factors? We should start thinking of ways around this limitation.
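One common way around the limitation is to have the model delegate arithmetic to an external tool instead of doing it "mentally." A toy sketch of that idea (the `CALC(...)` marker convention is hypothetical; real systems use structured tool/function calls, but the principle is the same):

```python
import re

def answer_with_calculator(model_output: str) -> str:
    """Replace hypothetical CALC(a*b) markers in a model's output
    with exact results computed outside the model."""
    def run(match):
        a, b = map(int, match.group(1).split("*"))
        return str(a * b)  # exact integer arithmetic, no token-by-token guessing
    return re.sub(r"CALC\(([^)]+)\)", run, model_output)

print(answer_with_calculator("87176363 * 198364 = CALC(87176363*198364)"))
# -> 87176363 * 198364 = 17292652070132
```

This also scales to three or more factors by chaining tool calls, which sidesteps the memorization problem entirely.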
ArcticWinterZzZ t1_jdt10ie wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
Yes, it can probably be done. How? I don't know. Maybe some kind of neural loopback structure that runs layers until it's "done". No idea how this would really work.
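A purely illustrative toy of "loop until done" (nothing like a real transformer, just the control flow): apply the same step repeatedly and halt once the state stops changing, a stand-in for a learned halting signal as in Adaptive Computation Time.

```python
import math

def run_until_done(x, step, max_iters=100, tol=1e-6):
    """Apply `step` repeatedly and stop once the state stops changing,
    standing in for a network that decides when it's 'done'."""
    for i in range(max_iters):
        nxt = step(x)
        if abs(nxt - x) < tol:
            return nxt, i + 1  # converged: halt early
        x = nxt
    return x, max_iters

# Example: iterating cos converges to its fixed point (~0.739).
value, iters = run_until_done(1.0, math.cos)
print(round(value, 3), iters)
```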
ArcticWinterZzZ t1_jdt0urg wrote
Reply to comment by Ok_Faithlessness4197 in Why is maths so hard for LLMs? by RadioFreeAmerika
I don't think that's impossible to add. You are right: chain of thought prompting circumvents this issue. I am specifically referring to "mental math" multiplication, which GPT-4 will often attempt.
All_the_questions2 t1_jdt7ptx wrote
Reply to How are you viewing the prospect of retirement in the age of AI? by Veleric
We'll all be on universal basic income. There's a post on OpenAI's site where Sam Altman talks about our utopian future, and his examples of how and why it's going to be so great amount to "increased material wealth." The OpenAI website also states that one of the major perks of being a non-profit is that they can test-drive universal basic income if they want. So what he's really saying is, "we're about to be stupid loaded." Then he lists the benefits for normies: cured disease, status, drama, creating things, "and we're gonna help find ways to do that." He also plugs something about "feeling useful," the implication being that we won't be. I've already hashed this out with ChatGPT: basically, the system and the user will be almost indistinguishable, except for the input needed to create better products and services for the user.
So ya know, you can retire, but our uptime is looking like 100%. They'd better make me really comfy, and my experience had better be bliss. I'll take the blue pill, OK, but only if it's amazing... so I'm counting on my elite overlords to deliver.