Recent comments in /f/singularity
danellender t1_jdyan02 wrote
Reply to comment by BangEnergyFTW in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
What I see is not so much an increase in knowledge as a different, and to my mind far superior, experience. I'm more likely to seek information when it isn't buried in marketed page rankings or branded portals that in some instances all serve up identical phrasing.
When the iPhone came out, people's experience of mobile suddenly changed. I see the same thing happening right now.
Iffykindofguy t1_jdyallx wrote
How do you know we don't experience the world in 0s and 1s, with our brains just translating it for us? We don't understand our own consciousness; how can we hope to understand another entity?
SkyeandJett t1_jdy9xxi wrote
That's like saying we only experience the world in discrete electrical impulses.
Evan_jansen t1_jdy9xdf wrote
100% agree. I have always found it hard to believe AGI will ever be sentient for this exact reason. How can it go from being a computer to, say, human? How does it cross that boundary? 🤷🏼
_Alasdair t1_jdy9mfd wrote
Reply to J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
I built something exactly like this back when the GPT-3 API came out. It was pretty cool, but I eventually got bored with it because it couldn't do anything. I tried hooking it up to external APIs to get real-world live data, but by the end everything was so complicated and slow that I gave up.
Hopefully with the GPT-4 plugins we can now make something actually useful. It's gonna be awesome.
reptilot t1_jdy8nas wrote
Reply to comment by HeinrichTheWolf_17 in If you went to college, GPT will come for your job first by blueberryman422
> Needless to say, Truckers, Contractors and Garbagemen may be the last people to lose their jobs.
The irony that Yang campaigned so hard on truckers being first on the chopping block.
kinetsu_hayabusa t1_jdy8era wrote
Reply to comment by bjdkdidhdnd in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were "superhuman" by exstaticj
I did business school and I agree
WonderFactory t1_jdy7v00 wrote
Reply to Singularity is a hypothesis by Gortanian2
We actually don't need AI to develop much beyond where it is at the moment to get crazy advances in medicine and technology over the next decade. Just applying ML as it stands now to thousands of different applications will lead to crazy breakthroughs. Imagine thousands and thousands of models like AlphaFold and what they will bring to scientific advancement. A diffusion model that can literally read people's minds from brain MRIs was posted here yesterday. That's crazy sci-fi stuff already happening. Things are happening that a year ago I wouldn't have thought possible in my lifetime.
PrivateLudo t1_jdy672b wrote
Reply to comment by CrazyShrewboy in If you went to college, GPT will come for your job first by blueberryman422
I understand your point. I just wish people respected those kinds of jobs more. There's nothing wrong with doing them; not everybody wants to be a flashy businessman, designer, or Silicon Valley techie.
KerfuffleV2 t1_jdy5tok wrote
Reply to comment by CrazyShrewboy in LLMs are not that different from us -- A delve into our own conscious process by flexaplext
> if chatgpt had memory, RAM, a network time clock, and a starting prompt, it would be sentient. So it already is.
I feel like you don't really understand how LLMs work. There's no one sitting in a dark room; the model literally doesn't do anything until you feed it a token. So there's nothing to be aware of; it's just a bunch of inert floating-point numbers.
But even after you give it a token, it doesn't decide to say something. You basically get back a list of every token in the vocabulary with a probability associated with it, which might just be a large array of 30k-60k floats.
At that point, there are various strategies for picking a token: you can take the one with the highest value from the whole list, pick randomly from the top X items, etc. That part involves very simple functions that basically any developer could write without much trouble.
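The picking step described above can be sketched in a few lines of Python. This is a toy illustration only (the scores and vocabulary size are made up, and it is not the commenter's actual Rust code):

```python
import random

def greedy_pick(logits):
    """Greedy decoding: return the index of the highest-scoring token."""
    return max(range(len(logits)), key=lambda i: logits[i])

def top_k_pick(logits, k=3, rng=random):
    """Top-k sampling: restrict to the k highest-scoring tokens,
    then sample one of them in proportion to its score."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    weights = [logits[i] for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

# Toy score list for a 6-token vocabulary; a real model emits one
# float per vocabulary entry, e.g. 30k-60k of them.
scores = [0.05, 0.40, 0.10, 0.25, 0.15, 0.05]
print(greedy_pick(scores))           # always the argmax index
print(top_k_pick(scores, k=3))       # one of the top-3 indices, at random
```

Real samplers add refinements like temperature and nucleus (top-p) cutoffs, but they are all variations on this same "turn a float array into one token index" step.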
Now, I'm not an expert but I do know a little more than the average person. I actually just got done implementing a simple one based on the RWKV approach rather than transformers: https://github.com/KerfuffleV2/smolrsrwkv
The first line is the prompt, the rest is from a very small (430M parameter) model:
In a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese.
The creatures even fought with each other!
The Tibet researchers are calling the dragons "Manchurian Dragons" because of the overwhelming mass of skulls they found buried in a mountain somewhere in Tibet.
The team discovered that the dragon family is between 80 and 140 in number, of which a little over 50 will ever make it to the top.
Tibet was the home of the "Amitai Brahmans" (c. 3800 BC) until the arrival of Buddhism. These people are the ancestor of the Chinese and Tibetan people.
According to anthropologist John H. Lee, "The Tibetan languages share about a quarter of their vocabulary with the language of the Tibetan Buddhist priests." [end of text]
[deleted] t1_jdy59gx wrote
Reply to Singularity is a hypothesis by Gortanian2
[deleted]
Loud_Clerk_9399 t1_jdy50z6 wrote
Reply to comment by GoodAndBluts in How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
Sorry, I meant to say the value of each dollar went up. That was a misstatement in my explanation. But thank you for the correction.
User1539 t1_jdy4x5l wrote
Reply to comment by Sashinii in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Some people are already on opposing ends of that spectrum. Some are crying that ChatGPT needs a bill of rights because we're enslaving it. Others argue it's hardly better than ELIZA.
Those two extremes will probably always exist.
GoodAndBluts t1_jdy4vnw wrote
Reply to comment by Loud_Clerk_9399 in How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
that is the opposite of deflation, but I get what you are trying to say
PurpleLatter6601 t1_jdy4sw2 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Lots of humans have trouble thinking clearly, or are big fat liars, too.
User1539 t1_jdy4opa wrote
Reply to comment by EnomLee in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I've been arguing this for a long time.
AI doesn't need to be 'as smart as a human', it just needs to be smart enough to take over a job, then 100 jobs, then 1,000 jobs, etc ...
People asking if it's really intelligence or even conscious are entirely missing the point.
Non-AGI AI is enough to disrupt our entire world order.
User1539 t1_jdy4ig4 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
We need real, scientific definitions.
I've seen people argue we should give ChatGPT 'rights' because it's 'clearly alive'.
I've seen people argue that it's 'no smarter than a toaster' and 'shouldn't be referred to as AI'.
The thing is, without any clear definition of 'intelligence' or 'consciousness' or anything else, there's no great way to argue that either of them is wrong.
beambot t1_jdy49tr wrote
Reply to comment by EnomLee in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
If you assume that human collective intelligence scales roughly logarithmically, you'd only need about five Moore's Law doublings (7.5 years) to go from "dumbest human" (we are well past that!) to "more intelligent than all humans ever, combined."
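The arithmetic behind that estimate is simple to check. Note that both the ~18-month doubling period and the logarithmic-scaling premise are the commenter's assumptions, not established facts:

```python
DOUBLING_PERIOD_YEARS = 1.5   # classic Moore's Law cadence (~18 months)
doublings = 5                 # the estimate in the comment

factor = 2 ** doublings                      # raw capability multiplier
years = doublings * DOUBLING_PERIOD_YEARS    # elapsed time

print(f"{doublings} doublings = {factor}x in {years} years")
# 5 doublings = 32x in 7.5 years
```

So the claim reduces to: a 32x capability increase, under log-scaling, would span the gap from one human to all humans combined.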
Spire_Citron t1_jdy3fly wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I don't think his point is unreasonable. There's a difference between an AI being able to figure things out for itself and an AI pulling known information from its database, and we should be clear on that distinction. That's not to say that an AI being able to store and retrieve information and communicate it in different ways isn't useful or impressive, but it's not the same as one that can truly piece together ideas in novel and complex ways and come to its own conclusions. They're both AI, but the implications of the latter would be far more significant.
problematikUAV t1_jdy2fur wrote
Reply to comment by BigZaddyZ3 in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
"Yeah, but, like, Tony Stark told you what to do"
fires
"DON'T EVER COMPARE ME TO, UGH, now I've shot his head off!"
BigZaddyZ3 t1_jdy1xyf wrote
Reply to comment by greatdrams23 in Singularity is a hypothesis by Gortanian2
Depends on what you define as a "long way," I guess. But the question wasn't whether the singularity would happen soon; it was whether it would ever happen at all (barring some world-ending catastrophe, of course). So I think quantum computing is still relevant in the long run. Plus, it was just meant to be one example of a way around the limit of Moore's law. There are other aspects that determine how powerful a technology can become besides the size of its chips.
Ok_Sea_6214 t1_jdy1u2x wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Another issue is that AI has to limit itself to human boundaries, as when playing video games: people would complain that AI has an unfair advantage because it can click so much faster, so developers limited its speed and blocked other "cheating" methods, like being able to see the whole map at once.
Except clicks per minute is literally what separates the best human gamers from everyone else, and in Total War: Warhammer many top players watch the whole map at once. It's these almost superhuman abilities that make them so good at the game, yet when AI takes this to the next level it becomes cheating.
greatdrams23 t1_jdy192q wrote
Reply to comment by BigZaddyZ3 in Singularity is a hypothesis by Gortanian2
Quantum computing is a long way off. You cannot just assume that it, or any other technology, will deliver what is needed.
Once again: I look for evidence that AGI and the singularity will happen, but I see none.
It just seems to be assumed that the singularity will happen, and therefore proof is not necessary.
Yomiel94 t1_jdy0yc3 wrote
Reply to comment by flexaplext in LLMs are not that different from us -- A delve into our own conscious process by flexaplext
How do you intuit mathematical concepts?
[deleted] OP t1_jdyap70 wrote
Reply to comment by SkyeandJett in AGI will only experience the world in 0s and 1s by [deleted]
[deleted]