Recent comments in /f/singularity

Gortanian2 OP t1_jdxp2f9 wrote

It’s truly fascinating. And I agree that it is a possible risk. But I don’t think people should start living their lives as if it is an absolute certainty that ASI will solve all their problems within the next couple decades.

My point is that people should consider both possibilities: either the singularity will happen, or it won’t. And there are well thought-out arguments for both sides even if we disagree with them.

7

Special_Freedom_8069 t1_jdxp146 wrote

It depends on your living expenses. It may not be completely relevant in this case, but in the r/financialindependence community they have the so-called "4% rule": you can withdraw 4% per year from your portfolio indefinitely. So, if you need $30,000 per year, you would need $750,000 in your portfolio to never have to work again. But if you are just planning to ride out a few years, the amount is much smaller, of course.
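The arithmetic behind the rule is just division; here's a minimal sketch (the 4% rate is the community's heuristic, not a guarantee, and the function name is mine):

```python
def portfolio_for_fire(annual_expenses: float, withdrawal_rate: float = 0.04) -> float:
    """Portfolio size needed so that withdrawing `withdrawal_rate`
    of it per year covers `annual_expenses`."""
    return annual_expenses / withdrawal_rate

print(portfolio_for_fire(30_000))  # 750000.0
```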

I also foresee that the 4% rule or even the whole FIRE (Financial Independence, Retire Early) movement will die out once UBI arrives, for obvious reasons.

1

Gortanian2 OP t1_jdxofbi wrote

Thank you. I completely agree with all of this. The criticism I’m raising is against a literal singularity event. As in, unbounded recursive self-improvement where we will see ASI with godlike abilities weeks after AGI gets to touch its own brain.

But I agree that AGI is going to change the world in surprising ways.

21

DaCosmicHoop t1_jdxo6bx wrote

Honestly, forget the far future super crazy amazing stuff.

Even if the world in 50 years is only a bit better than the world of today, it's still something to be excited about.

Even in the least optimistic scenarios, I'll still be able to get a graphics card better than a 4090 from the toy in a McDonald's Happy Meal.

6

PrivateLudo t1_jdxnc2d wrote

I think society mocks them in some way. I've worked in those kinds of dirty blue-collar jobs before, and some people straight up told me, "When are you going to study and get a real job?" Like… what the hell? I'm getting paid very well; I don't need to get a "real job." How are those essential jobs not real jobs?

Society wouldn’t even be able to function at all without the dirty blue-collar jobs. It's just sad that society promotes jobs in finance that don't really contribute anything aside from numbers going up, while plumbers, electricians, mechanics, and janitors are seen as bottom-of-the-barrel, low-IQ work. They're the foundation of this society; nothing would work without them.

There’s a big sense of delusion in society where people subconsciously feel superior intellectually because they’re doing a clean office job.

15

ZaxLofful t1_jdxmzk2 wrote

I agree, this is great! I have been doing something like this for a while, just in case.

A local copy of Wikipedia, mirrored from their official dump and rebuilt using a different front end.

My masterpiece is almost wasted effort now, when I can just have an LLM spit out whatever I need.

1

Anjz OP t1_jdxlw8f wrote

No, ChatGPT is closed source and we don't have the weights for it. Plus, it's probably too big to run on consumer GPUs.

Stanford University came up with Alpaca, a lighter-weight model fine-tuned from Facebook's LLaMA that still works about as well as earlier iterations of GPT. This one you can run locally, given some know-how.

1

Queue_Bit t1_jdxle5m wrote

Sure, there could be some theoretical wall that stops progress in its tracks. But currently, there is zero reason to believe that a wall like that exists in the near future. Even if AI only improves by a single factor of 10, it will STILL absolutely change the world as we know it in drastic ways.

And here's the funny part: based on research, we KNOW a 10x improvement is already guaranteed. So, I get that you want to slow the hype and want people to think critically, but the truth is that many of us are. And importantly, a greater-than-10x improvement is almost certain.

Imagine an AI that is JUST as good as humans are at everything. Not better. Just equal. But, with the caveat that this AI can output data at a rate that is unachievable for a human. This much is certain. We will create a general AI that is as good as humans at everything. Once that happens, even if it never gets better, we will live in a world so different than today that it will be unrecognizable.

If you had asked me this time last year if we were going to see a singularity-type event in my lifetime, I would have been unsure, maybe even leaning towards no. But now? If massive societal and economic change doesn't happen by 2030, I will be absolutely shocked. It looks inevitable at this point.

65

simmol t1_jdxlcco wrote

Gary Marcus is wrong on this. Papers have already been published that train simple machine-learning models on publications written before date X and demonstrate that the algorithm can surface concepts that only appear in publications after date X. These didn't even use LLMs, just simple Word2Vec embeddings, where each word in the publications was mapped to a vector and the ML model learned the relationships between the numerical vectors across all papers published before date X.
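The mechanism is just nearest-neighbor lookups in the learned vector space. A toy sketch of the idea (these 3-d vectors are made up for illustration, not from any trained model or from the papers mentioned):

```python
import numpy as np

# Word2Vec-style embeddings place related terms near each other,
# so relationships learned from older text can point toward
# concepts that are only named explicitly in later papers.
embeddings = {
    "thermoelectric": np.array([0.9, 0.1, 0.0]),
    "ferroelectric":  np.array([0.8, 0.2, 0.1]),
    "banana":         np.array([0.0, 0.1, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embeddings["thermoelectric"]
# Rank every other word by similarity to the query vector.
ranked = sorted(
    (w for w in embeddings if w != "thermoelectric"),
    key=lambda w: cosine(query, embeddings[w]),
    reverse=True,
)
print(ranked[0])  # ferroelectric (the nearby material term, not "banana")
```

In the real studies, the "prediction" is exactly this kind of ranking: candidate terms from pre-date-X text that sit unexpectedly close to a property of interest in the embedding space.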

11

Gortanian2 OP t1_jdxkjna wrote

  1. Very strong counter argument. Love it.

  2. Again, strong, but I would argue that we don’t know where we are in terms of algorithm optimization. We could be very close or very far from perfect.

  3. I would push back and say that the parent doesn’t raise the child alone; the village raises the child. In today’s age, children are being raised by the internet. And it could be argued that the village/internet as a collective is a greater “intelligent agent” making a lesser one. Which does bring up the question of how exactly we made it this far.

1