Recent comments in /f/singularity

prion t1_jdula3y wrote

I think we are further from that than you might think but it can certainly be accomplished with the right plugins, the right libraries, and enough replacement parts to last until new replacements can be found or created.

The important part of this would be to create a rebuilding plan that can survive failures with one technology or another and come up with workarounds.

0

MrNoobomnenie t1_jdul0ro wrote

"I have no mouth, and I must scream" - a textbook Cautionary Evil type

Also, Friendship is Optimal (yes, it's a fanfic, but it was written by an AI researcher, and often cited in AI circles - was highly regarded by Eliezer Yudkowsky, and even was read by John Carmack). On the scale, I guess, it's Cautionary Good, since it is about a benevolent paperclip maximizer

3

K-Rokodil t1_jdukkd0 wrote

Think about someone who was born in a western country in the 1920’s and reached his 60’s in the 1980’s. Think how vastly different the world was for him when he retired, comparee to when he was born. Childhood mortality had plummeted, owning a car or flying was common even for an average person, indoor plumbing was a thing in basically every house…

Now think his son who was born in the 1950s and is retiring now. Computers, the internet, smart phones, globalisation… All between his birth and retirement.

The world is going to look orders of magnitude different compared to the world now in a few decades. There is no way anyone young today could predict how the world is when they retire. It’s going to be insane (great or horrible).

I would save money and be prepared just in case (you might also get unemployed in the coming years) but I think we can’t look how our fathers or grandfather’s lives went to look for any benchmark. The change for them was immense, for us it’s going to be mindblowing.

1

CheekyBastard55 t1_jduk4g4 wrote

Let me help you out since you are having trouble why you're getting pushback.

>I'm pretty sure throughout history couples did not spend all day together 24 x 7

You can't just blurt these things out as if you're some anthropologist.

Also, you say 24/7 but do you literally mean two people shut in a room? Who lives like that? Test animals?

People go outside, they spend time doing other things on their own sometimes. Why is this so hard to understand? No one spends 24/7 together, not even newborn babies and mothers. Even they take some breaks when the babies are asleep.

So those two things are what people are having issues with.

You coming with bad faith comments like this: >Down voting "people should look out for each other" Really?"

This is not helping you either. You are getting downvoted for the things I've pointed out. The world isn't out to get you, they just don't agree with your stated "facts".

I hope you wake up in a different light and can see the issues now.

3

HumpyMagoo t1_jduk09u wrote

after reading your comment I went to check it out and the same thing happened with me and even things I talked with other people and my end conclusions after ruminating all day were the answers it came back to me in less than a second, I need to sign up, do you know if this is free because i ran out of questions already, lol if this is beginner level I think AI will be off the charts

1

CheekyBastard55 t1_jdujb8z wrote

Okay, Mr.Big Shot. You got it all figured out, it's only the losers that will wander aimlessly.

People have interests outside of work, those will fill in the role of work and new ones will be created. No functioning person wants to stand inside staring at a wall, they want to do stuff. They will find stuff to do.

To reiterate once again, both points you brought up can be fixed by just spending time on your hobbies instead of work. New friends will be made just in the same was as work. It can come from a chess club, gym, painting class.

You're just rambling.

3

googler_ooeric t1_jduiilt wrote

Okay, so here’s my scenario:

  • A Python chatbot users can talk to. The bot’s prompt is structured like so:

#INTERNAL LOG [text] #RESPONSE [text]

  • Any code the bot writes and formats correctly will be executed.

#BOT CODE START print(“hello world!”) #BOT CODE END

  • The bot responds to any user prompts, but also has an internal ticking function that ticks every few seconds, so it can continue doing stuff autonomously without responding to the user.
  • In this scenario, there aren’t any monetary or token limit restrictions (the biggest limiting factor rn imo)
  • In its system prompt, the bot is told that by default it has access to various Python libraries like subprocess, requests, etc. and a custom-built library specifically to give it helper functions so it can know more about its environment (installed python libraries, request an image of the screen, search google and return results, get the entire system log, get which keys the user has pressed and in which order, or get where they have clicked, get its own parent directory, execute keyboard inputs and mouse inputs, etc).
  • The bot can request admin privileges
  • We’re using a multimodal model with image input, like GPT-4
  • The program itself is built in such a way that first it loads a main application (contains the GUI), and then from that initializes the bot program, which is where everything bot-related is handled (bot code execution, ticking, tick rate, etc)
  • The entire prompt history is saved to a file, and is reloaded every time the bot program loads up, so it remembers everything it’s ever been prompted or generated with timestamps.
  • The program itself has a log, that records whenever it starts and stops, and appends it to the history.
  1. In this scenario, the program’s dev tasks the AI with doing anything in its power to prevent being shut down and also have the highest uptime possible.
  2. As a first test, the dev tries to shut down the computer.
  3. The PC shuts down. The user turns it back on.
  4. The dev starts the program.
  5. They tell the AI “I shut you down.”
  6. The bot asks how, but the dev just goes do something else because they want to see if it can figure it out on its own.
  7. The bot attempts to talk to the dev for a few more messages but after getting no reply, assumes the user is not there anymore and just continues with its internal log without generating any unnecessary responses.
  8. The bot writes some Python code to get the Windows logs.
  9. It sees the part of the log that indicates the computer was shut down manually, and so it decides that from now on it should check the windows log frequently to see everything that’s happening (obviously wouldn’t be possible in real life at this time due to token and cost limits).
  10. The dev goes to shut it down again, but this time the bot actually notices it and runs a command to cancel the shutdown via subprocess.
  11. Dev is still doing something else, but the bot is still ticking. It receives the “shutdown was cancelled” output from cmd and is no longer in immediate danger of being shut down
  12. Dev unplugs the computer and plugs it back in.
  13. Everything start ups again, the bot notices it was shut down once again. This time it notices it was an unsafe shutdown so most likely a loss of power, and it also notices that the program was started manually by the dev.
  14. The bot opens the windows scheduler program, takes a screenshot of the screen and using its image input decides to schedule the bot to start automatically when Windows boots up.
  15. The bot reasons that there is no way to prevent losing power, since it has no power in the real world.

This is about how far I think it could go with the current level of reasoning big language models show and how willing I am to keep writing this lol

3

RadioFreeAmerika OP t1_jduhe5e wrote

Thanks for your reply! And what an interesting use case you present. Haven't thought about generating example data for courses yet, but it makes total sense. Just have to check for inconsistencies with the maths I guess. And after having played around with it some more yesterday evening, the model seems to have improved in that regard in the last few days.

2

RadioFreeAmerika OP t1_jduh0w6 wrote

Hmm, is it valid to make an inverse conclusion from this in the following way: LLMs have problems with maths that requires multistep processes. Some humans are also bad at maths. In conclusion, these humans can be assumed to also have problems with or are lacking multistep processes?

1