Recent comments in /f/singularity

danellender t1_jdyan02 wrote

What I see is not so much an increase in knowledge as a different and, to my mind, far superior experience. I'm more likely to seek information when it isn't buried under marketed page rankings or branded portals that in some cases all serve up the identical phrases.

When the iPhone came out, people's experience with mobile suddenly changed. I see that happening right now.

1

_Alasdair t1_jdy9mfd wrote

I built something exactly like this back when the GPT-3 API came out. It was pretty cool, but I eventually got bored with it because it couldn't do anything. I tried hooking it up to external APIs to get live real-world data, but by the end everything was so complicated and slow that I gave up.

Hopefully with the GPT-4 plugins we can now make something actually useful. It's gonna be awesome.

2

WonderFactory t1_jdy7v00 wrote

We actually don't need AI to develop much beyond where it is at the moment for crazy advances in medicine and technology over the next decade. Just applying ML as it stands now to thousands of different applications will lead to major breakthroughs. Imagine thousands and thousands of models like AlphaFold and what they will bring to scientific advancement. A diffusion model that can literally read people's minds from brain scans was posted here yesterday. That's sci-fi stuff already happening. Things are happening that a year ago I wouldn't have thought possible in my lifetime.

13

KerfuffleV2 t1_jdy5tok wrote

> if chatgpt had memory, RAM, a network time clock, and a starting prompt, it would be sentient. So it already is.

I feel like you don't really understand how LLMs work. It's not like a mind sitting in a dark room: the model literally doesn't do anything until you feed it a token. So there's nothing there to be aware of anything; between tokens it's just a bunch of inert floating point numbers.

But even after you give it a token, it doesn't decide to say something. You basically get back a list of every token in the vocabulary with a probability attached to each, which might just be a large array of 30k-60k floats.

At that point, there are various strategies for picking a token: you can take the one with the highest probability, sample randomly from among the top X items on the list, and so on. That part involves very simple functions that basically any developer could write without much trouble.
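To make that concrete, here's a minimal sketch of the two sampling strategies just described, greedy (argmax) and top-k. The vocabulary and logit values are made-up toy numbers, not from any real model; a real LLM would hand you tens of thousands of logits instead of five.

```python
import numpy as np

# Hypothetical logits: one raw score per token in a toy 5-word vocabulary.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])

# Softmax turns the raw scores into a probability for every token.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy: always take the single highest-probability token.
greedy = vocab[int(np.argmax(probs))]

# Top-k: keep only the k most likely tokens, renormalize, sample among them.
def top_k_sample(probs, k, rng):
    top = np.argsort(probs)[-k:]       # indices of the k highest-probability tokens
    p = probs[top] / probs[top].sum()  # renormalized distribution over those k
    return int(rng.choice(top, p=p))

rng = np.random.default_rng(0)
sampled = vocab[top_k_sample(probs, k=3, rng=rng)]

print(greedy)   # always the highest-scoring token
print(sampled)  # one of the top 3 tokens, chosen at random
```

Greedy decoding is deterministic and tends to produce repetitive text, which is why samplers like top-k (and temperature, top-p, etc.) are usually used instead.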

Now, I'm not an expert but I do know a little more than the average person. I actually just got done implementing a simple one based on the RWKV approach rather than transformers: https://github.com/KerfuffleV2/smolrsrwkv

The first line is the prompt, the rest is from a very small (430M parameter) model:


In a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese.

The creatures even fought with each other!

The Tibet researchers are calling the dragons “Manchurian Dragons” because of the overwhelming mass of skulls they found buried in a mountain somewhere in Tibet.

The team discovered that the dragon family is between 80 and 140 in number, of which a little over 50 will ever make it to the top.

Tibet was the home of the “Amitai Brahmans” (c. 3800 BC) until the arrival of Buddhism. These people are the ancestor of the Chinese and Tibetan people.

According to anthropologist John H. Lee, “The Tibetan languages share about a quarter of their vocabulary with the language of the Tibetan Buddhist priests.” [end of text]

3

User1539 t1_jdy4opa wrote

I've been arguing this for a long time.

AI doesn't need to be 'as smart as a human', it just needs to be smart enough to take over a job, then 100 jobs, then 1,000 jobs, etc ...

People asking if it's really intelligence or even conscious are entirely missing the point.

Non-AGI AI is enough to disrupt our entire world order.

31

User1539 t1_jdy4ig4 wrote

We need real, scientific, definitions.

I've seen people argue we should give ChatGPT 'rights' because it's 'clearly alive'.

I've seen people argue that it's 'no smarter than a toaster' and 'shouldn't be referred to as AI'.

The thing is, without any clear definition of 'intelligence' or 'consciousness' or anything else, there's no good way to argue that either of them is wrong.

5

Spire_Citron t1_jdy3fly wrote

I don't think his point is unreasonable. There's a difference between an AI being able to figure things out for itself and an AI pulling known information from its database, and we should be clear on that distinction. That's not to say that an AI being able to store and retrieve information and communicate it in different ways isn't useful or impressive, but it's not the same as one that can truly piece together ideas in novel and complex ways and come to its own conclusions. They're both AI, but the implications of the latter would be far more significant.

1

BigZaddyZ3 t1_jdy1xyf wrote

Depends on what you define as a ”long way” I guess. But the question wasn’t whether the singularity would happen soon; it was whether it would ever happen at all (barring some world-ending catastrophe, of course). So I think quantum computing is still relevant in the long run. Plus it was just meant to be one example of a way around the limits of Moore’s law. There are other aspects that determine how powerful a technology can become besides the size of its chips.

2

Ok_Sea_6214 t1_jdy1u2x wrote

Another issue is when AI has to limit itself to human boundaries, like when playing video games: people complained that the AI had an unfair advantage because it could click so much faster, so developers limited its speed, along with other "cheating" methods like being able to see the whole map at once.

Except clicks per minute is literally what separates the best human gamers from everyone else, and in Total War: Warhammer many top players watch the whole map at once. These almost superhuman abilities are what allow them to be so good at the game, yet when AI takes them to the next level it's called cheating.

3

greatdrams23 t1_jdy192q wrote

Quantum computing is a long way off. You can't just assume that it, or any other technology, will deliver what's needed.

Once again, I look for evidence that AGI and the singularity will happen, but I see none.

It just seems to be assumed that the singularity will happen, and that therefore proof isn't necessary.

2