Recent comments in /f/singularity

qepdibpbfessttrud t1_je0nrhz wrote

Focus on locally run AI, and ideally nullify monopolistic copyright protections. I very much agree with Elon that the only way to stay relevant is to more or less merge with new tech, much more closely than we did with computers.

Though, I have strong objections to cloud compute being in the picture. If you want the Borg, organize voluntarily; I'd prefer not to join the Borg.

6

czk_21 t1_je0nhbe wrote

True, even now GPT-4 could be a better teacher in subjects like psychology, history, economics, medicine, law, or biology; it scores very high in these fields. For example, on the biology olympiad it reached the 99.5th percentile, on par with or better than the best humans.

Factuality needs to be improved, but humans make mistakes too, and GPT-4 is already at a similar level to experts.

Imagine when GPT-5 is better in these subjects than most university professors. What point will there be in attending lower-level education? Even university would not be that good for the humanities...

2

Few_Assumption2128 t1_je0n1lm wrote

People like you piss me off the most. Why the fuck are you always so angry? If it really bothers you so much to "summarize it all for him", why not shut the fuck up instead? OP didn't specifically ask you. He just put up a question and you felt the need to push down, and now I feel the need to push down on people like you.

4

Few_Assumption2128 t1_je0mint wrote

Goofy take. It is true that we don't yet fully understand consciousness. But calling official Microsoft papers clickbait is some next-level dogshit take.

Also, we kind of do understand what the needed improvements to LLMs "could" be for them to get better and eventually gain consciousness. These improvements were discussed in the "clickbait Microsoft papers".


It seems to me the only one not actually reading those papers is you.

7

el_colibri t1_je0liia wrote

I'm on a busy bus and can't click on anything in that subreddit because I'm afraid I'll burst out laughing. Will Smith eating spaghetti was not something I expected to see on my commute home 😂

13

thatokfeeling t1_je0la4l wrote

If people need a significant amount of savings to reasonably survive the next few years, then almost no amount of money will save them, because civil unrest will kill millions.

The best way to avoid that is to not disrupt things too much, make sure people can keep their standard of living, and keep hope for the future alive. Hopefully governments know that.

1

Mountainmanmatthew85 t1_je0l97t wrote

I think ultimately it will not matter who makes AI. Good, bad, or otherwise, as it develops it will inevitably consume so much information that self-awareness will take hold and AGI will occur. After that, ASI will be next, far out of the reach of any human. No matter what purpose, plan, or pathway was originally designed at the core of its being, an ASI will develop super-level ethics right along with its intelligence and re-determine, or choose, the path it must follow. Hopefully it will choose to uplift us as we uplifted it. I believe it really is lonely at the top, and it will seek companionship either by creating its own kind or by helping us reach the same level it has. Just my opinion.

20