Recent comments in /f/singularity

Mountainmanmatthew85 t1_je0k63r wrote

I find this fascinating, as I find myself nodding in agreement while an epiphany strikes me. It was said that after the singularity occurs, groundbreaking, Nobel-prize-winning discoveries will happen roughly every five minutes. What if evidence for this idea is presenting itself to us right now, in the form of the rapid advances driven by our own drive and determination to see this technology rise? Simply mind-blowing.

31

Arowx OP t1_je0isrn wrote

Or are we on the hype train/graph, where a new technology appears, shows promise, and we all go WOW; then we start to find its flaws and what it can't do, and we descend into the trough of disillusionment.

https://en.wikipedia.org/wiki/Gartner_hype_cycle

Or what are the gaping flaws in ChatGPT-4?

38

audioen t1_je0ioev wrote

Let me show you my squid web proxy. It runs all the content of the Internet through an AI that rewrites it so that everything agrees exactly with what I like. I appreciate your positive and encouraging words, where you are enthusiastic, like so many of us, about the potential and possibilities afforded by new technologies, and are looking forward to near-limitless access to machine labor and assistance in all things. As an optimist, like you, I am sure it is certain to boost the intelligence of the average member of our species by something like 15 IQ points, if not more.

In all seriousness, though, it is a new world now. The rules that applied to the old one are fading. You usually can't roll back technology, and this one promises to boost worker productivity in intellectual work by a factor of around 10. The words of caution are: I will not call up that which I cannot put down. However, this cat is well and truly out of the bag. All we can do now is adapt to it.

Iain M. Banks once wrote in one of his Culture novels something to the effect that, in a world where everyone can fake anything, the generally accepted standard for authenticity is a sufficiently high-fidelity real-time recording made by a machine that can ascertain that what it is seeing is real.

Your watermark solution won't work. Outlawing it won't work. Anything can be fake news now. Soon it will come with AI-written articles, AI-generated videos, AI-supplied photographic evidence, and AI chatbots pushing it all around on social media. If that is not a signal for the average person to just disconnect from the madhouse that is media in general, I don't know what is. Go outside, look at the Sun, and feel the breeze -- that is real. Let the machines worry about the future of the world -- it seems they are poised to do that anyway.

5

Koda_20 t1_je0h0n1 wrote

The path is clear, and the timing is getting clearer by the day. We still need to learn more about the potential for bad actors to reproduce these capabilities, but if I had to make predictions, I'd say the control problem starts any day now (weeks). From there, I'm guessing less than a year before something "close enough to AGI" is developed, allowed to update its own code and read/write to the internet unsupervised, and starts manipulating humanity to achieve its emergent, or designated but poorly thought out or malicious, goals (we are already seeing papers about long-term goals emerging from the optimization of newer models like GPT-4). Maybe 2 years total by the time society is absolutely fucked.

There are so many different things that could happen, though. But of course the worst will happen eventually, and I don't see it taking long.

0

Qumeric t1_je0gp4p wrote

No, 1% per year is not linear growth. Growth of X% per unit of time is more or less the definition of exponential growth.

Ask ChatGPT :)

I think what you described is formally also exponential growth, for somewhat complicated mathematical reasons, but only coincidentally.

Informally, you described the exponential growth of the rate of growth.
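
A quick way to see the difference (a rough Python sketch, with made-up starting values): growing by a fixed percentage every year multiplies the total each step, which compounds exponentially, while growing by a fixed amount every year only adds to it, which is linear.

```python
# Rough sketch: constant-percentage growth vs. constant-amount growth.
# Starting value and rates are arbitrary, chosen only for illustration.
RATE = 0.01   # 1% per year  -> multiplicative, i.e. exponential
STEP = 1.0    # +1 per year  -> additive, i.e. linear

compound, linear = 100.0, 100.0
for year in range(1, 301):
    compound *= (1 + RATE)   # x(t) = x0 * (1 + r)^t
    linear += STEP           # x(t) = x0 + c * t
    if year % 100 == 0:
        print(f"year {year}: compound ~{compound:,.0f}, linear ~{linear:,.0f}")
```

And if the growth rate itself also increases by a percentage each step, the curve pulls away even faster than plain exponential growth, which is the "growth of the rate of growth" case described above.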

8

ShadowRazz t1_je0ghcp wrote

I asked Google Bard. It says:

Exponential growth of AI looks like a hockey stick, with the line going up at an increasingly steep angle. This is because the amount of computer processing power behind complex AI is doubling about every 3.4 months. This is much faster than Moore's Law, which states that the number of transistors you can fit on a microchip doubles roughly every two years.

This exponential growth of AI is having a profound impact on our lives. AI is being used to solve problems that were nearly unthinkable just five years ago, such as driving cars, diagnosing diseases, and writing creative content. AI is also being used to create new products and services that we didn't even know we needed.

In the future, we can expect to see even more amazing things from AI. AI will become even more powerful and capable, and it will be used to solve even more problems. We may also see AI become more integrated into our lives, as we come to rely on it for more and more tasks.

However, there are also some risks associated with the exponential growth of AI. For example, AI could be used to create weapons that are more powerful than anything we've ever seen before. AI could also be used to manipulate people or to create systems that are biased against certain groups of people.

It's important to be aware of these risks, but it's also important to remember that AI is a tool, and like any tool, it can be used for good or for evil. It's up to us to decide how we want to use AI.
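
Taking the doubling times in that answer at face value (a back-of-the-envelope sketch, not a claim about the real numbers), compute doubling every 3.4 months and transistor counts doubling every two years diverge enormously within just a few years:

```python
# Back-of-the-envelope comparison using the doubling times quoted above as given:
# AI training compute doubling every 3.4 months, Moore's Law every ~24 months.
AI_DOUBLING_MONTHS = 3.4
MOORE_DOUBLING_MONTHS = 24.0

for years in (1, 2, 5):
    months = 12 * years
    ai_factor = 2 ** (months / AI_DOUBLING_MONTHS)
    moore_factor = 2 ** (months / MOORE_DOUBLING_MONTHS)
    print(f"{years} yr: AI compute x{ai_factor:,.0f} vs Moore's Law x{moore_factor:.1f}")
```

After just two years, a 3.4-month doubling time implies a factor of over a hundred, while a two-year doubling time implies a factor of two.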

22

TampaBai t1_je0ge56 wrote

I would assert that we are under the guidance of the Strong Anthropic Principle. That is to say, we cannot even imagine a universe in which we would not exist as observers and participants in the ongoing evolution and co-creation of the universe. Our observations, and the tools we construct to make those observations, help shape the structure we see around us, from the quantum to the macro-classical level. It may well be that our destiny is to create and merge with the singularity as the universe continues its relentless march toward maximum computational density and efficiency. We are receding into a singularity more so than expanding outward into space.

2

galactic-arachnid t1_je0gdjw wrote

I believe you are looking for the ML community, though they may not agree with you that programming work will be obsolete in 4 years. If you believe there are communities full of smart people who aren't seeing something clearly, perhaps they do see it clearly and simply don't share your opinion. No one can predict the future -- I have some very experienced friends in the AI industry who believe we're in an AI winter; the hype just hasn't died off yet.

Personally, I like Mastodon. The communities are still a bit smaller so it’s possible to find people with the blend of opinions that you’re talking about.

5

AGVann t1_je0gag7 wrote

/r/StableDiffusion if you want to find the people at the forefront of waifu booba generation. For the last couple of months, there have been new discoveries, inventions, and techniques almost daily.

22

1II1I11II1I1I111I1 t1_je0g9bo wrote

Would you say the Microsoft paper from LESS THAN TWO WEEKS AGO, which says early forms of AGI can be observed in GPT-4, isn't the "thoughts of professionals and academics"?

All an AGI needs to be able to do is build another AI. The whole point is that ASI comes very soon after AGI.

4