Recent comments in /f/singularity

No_Ninja3309_NoNoYes t1_je1at2h wrote

  1. Education

  2. Prompt engineers, AI babysitters

  3. All of them

  4. We can, but they won't take it seriously.

I only know about Andrew Yang. Life without a job is not something that most people are actively preparing for. Society can handle a short-lived famine, but I'm more worried about riots and civil unrest. Andrew Yang thinks that UBI is possible. We'll have to wait and see...

3

bullettrain1 t1_je1aiem wrote

My point is that the livelihoods of CEOs are not threatened by AI, as opposed to everyone else's. To your point: they will use it as a tool. That's my issue with people saying “oh, it will replace CEOs”, because it won't put them out of work, it'll make them richer.

1

fluffy_assassins OP t1_je1a1b6 wrote

They wouldn't have the authority; they'd get the money and take the credit.

But the AI would have the authority and decision-making.

The CEO's "tool" would be making the decisions. That could go both ways, but hey, it'd be change.

2

CMDR_BunBun t1_je19lkd wrote

This question comes up again and again, so I will try to paint you an example. Imagine you are trapped in a locked cage with nothing on you but your wits. There is one guard in the room guarding your cage, and through the bars you can see the key to your cage lying on the ground, just out of your reach. Seems like a hopeless situation, no? Now imagine that your captors are a bunch of below-average-intelligence 4-year-olds. How long do you think it would take you to get free using just your superior intelligence? That ridiculous scenario is exactly what people are proposing when they say human intelligence could contain a superintelligence that was intent on breaking out.

19

RealFrizzante t1_je195fe wrote

All of those go through a console: it parses text and outputs text.

AGI will be able to intervene in the world via all means a human can.

TL;DR: Your experience and history as a human IRL is more than just what you have read and written throughout your life. AI at the moment only does text, or images, sometimes both. There are lots of things missing.

1

Smellz_Of_Elderberry t1_je18olm wrote

I would take a page out of the preppers' handbook: store essentials and get as self-sufficient as possible.

Even after UBI is implemented, it could be very shitty... maybe it only gives you the bare minimum, or falls short in any number of other ways.

Buy a house in the countryside, learn to grow and store your own food long term, purchase solar panels or some other power-generation technology that isn't reliant on the grid, and have a clean source of water.

The best way to weather the storm isn't to save dollars, it's to save actual physical goods. Money in, say, an economic collapse (which is very likely considering the amount of chaos that will ensue from millions losing their jobs overnight) is worthless. You could have a million dollars, but if I think it's got no buying power, I won't be trading you my chickens for it. Lol.

1

dokushin t1_je18mky wrote

> moreover atm afaik it is cappable of doing tasks it has been trained for, in a specific field of knowledge

This isn't true; the same GPT model will happily do poetry, code, advice, jokes, general chat, math, anything you can express by chatting with it. It's not trained for any of the specific tasks you see people talk about.
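A minimal sketch of that point, assuming the OpenAI Python client (v1+); the model name and prompts are just illustrative:

```python
# One model, many tasks: nothing changes between calls except the prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tasks = [
    "Write a haiku about spring rain.",                  # poetry
    "Write a Python function that reverses a string.",   # code
    "What is 17 * 23? Show your work.",                  # math
    "Tell me a joke about databases.",                   # humor
]

for prompt in tasks:
    reply = client.chat.completions.create(
        model="gpt-4",  # the same model handles every task
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n")
```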

As for the on-demand stuff: I agree with you there. It will need to be able to "prompt itself", or whatever the closest analogue of self-determination is.
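Purely speculatively, the simplest shape of a "prompt itself" loop might look like the sketch below; nothing here is a feature of current models, and the ask_model callable and prompt wording are hypothetical:

```python
# Hypothetical sketch: the model's own output seeds its next prompt.
def self_prompting_loop(ask_model, goal: str, max_steps: int = 5) -> list[str]:
    """ask_model: any callable mapping a prompt string to a reply string."""
    history = []
    prompt = f"Your goal: {goal}. What is your first step?"
    for _ in range(max_steps):
        reply = ask_model(prompt)
        history.append(reply)
        # The model effectively writes its own next prompt, the closest
        # analogue of self-determination mentioned above.
        prompt = f"You previously said: {reply}\nWhat should you do next?"
    return history
```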

1

No_Ninja3309_NoNoYes t1_je188qg wrote

Corruption is widespread amongst the elite. Self-managed teams are not such a big deal; most open-source communities operate fine with limited leadership. The business hierarchy is based on the military, but it's not very agile. If people know what they're doing, and with good AI that should be the case, you have no need for strict discipline.

3

Bismar7 t1_je183xd wrote

73 comments at the time I saw this and not one of them gave much of an answer to your question...

So to start, I think there are a couple of foundational ideas you need in order to know what to look for. The first and most vital is exponential vs. linear growth, and the experience of gradual exponential gains through linear perception.

All of us perceive time and advancement as gradual, despite the actual increasing rate of knowledge application (technology). This means that on a day-to-day, month-to-month basis you won't commonly feel "pinnacle" moments (like the GPT-4 demo), because most of the time the advancements are not presented or demonstrated as well. Additionally, the first 30% of an advancement takes longer than the last 70%, so it will always feel like nothing is happening for a long period of time, and then like things rapidly happen at the end.
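To put rough numbers on that shape (purely illustrative: the figures below assume capability doubles each year toward an arbitrary target of 1000 units):

```python
# Illustrative only: capability doubles yearly; the target is arbitrary.
capability, year, target = 1.0, 0, 1000.0

while capability < target:
    capability *= 2
    year += 1
    print(f"year {year:2d}: {capability:6.0f} units "
          f"({capability / target:6.1%} of target)")

# Nine of the ten doublings only get you to ~51% of the target;
# the final doubling covers the remaining ~49% in a single year,
# which is why exponential progress "feels" like a sudden jump.
```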

The next pinnacle moment will likely be AGI: basically adult-human-level AI that does not require a prompt to do a variety of adult-level tasks. Right now GPT and other LLMs must be prompted and must have a user in order to function; they operate within specific tasks at an adult level, but in practical intelligence they are closer to a 10-13 year old, with some pretty major limitations.

Now to the exponential trends: Moore's law was part of a much larger data set that predicted this back in 2004. Here is the precursor article and data (warning: it's long and a lot to take in):

https://www.kurzweilai.net/the-law-of-accelerating-returns

This is the actual data and projections, and generally they have held true. Kurzweil wrote How to Create a Mind a few years ago, and one of the things to look for is hardware in 2025 capable of something close to adult-brain simulation (the software will still need to be written, but that's when the hardware is expected). Longevity escape velocity is another major metric for transhumanists, currently estimated around 2029. And superintelligent transhumans, i.e. beings with a synthesis of AI and human capabilities equating to the intelligence of hundreds, thousands, or millions of people today, are projected for sometime in the mid-to-late 2030s.

Hardware advancements will happen first; then governments/DARPA will utilize them, then corporations, then everyone else. The runaway effect is the actual exponential aspect of this, so from this point until several years before it happens, it will feel like nothing is happening (because that's the nature of exponential gains being experienced with gradual linearity).

Your best bet, everyone's best bet, would be to read Kurzweil, Michio Kaku, Bostrom, and others who have studied and written about the what, how, and why of this subject. I would take most "doomers" like Musk or Gates, and even Bostrom (as philosophy isn't exactly computer science), with a grain of salt. Kurzweil tends to be the one who speaks best to the reality, even if he isn't always correct in his prediction timelines (though he is close).

20

AsuhoChinami t1_je17bej wrote

Sorry, but your friends' opinion is an incredibly stupid and utterly absurd one. Disagreeing about the future is one thing but saying we're presently in an AI winter is one of the most fucking stupid and delusional things I have ever read in my entire life.

7

acutelychronicpanic t1_je15hcf wrote

Well, I would think that companies would love nothing more than to hand Microsoft their employees' usage data so it can train them a fine-tuned model. At least at large enough companies.

As far as decisions go, it doesn't have to make them. Just have it present the top three options, with citations of company policy, its reasoning, and the pros and cons. It can pretty much do this now with GPT-4 if you feed it the relevant info.
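A minimal sketch of that pattern, assuming the OpenAI Python client (v1+); the policy text, question, and prompt wording are all placeholders:

```python
# Sketch: ask the model to present options, not to make the decision.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder: relevant policy text, retrieved however suits your setup.
policy_excerpts = "Section 4.2: Remote work requires manager approval. ..."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a decision-support assistant. Do not decide. "
                    "Present the top 3 options, each with pros, cons, your "
                    "reasoning, and citations to the policy text provided."},
        {"role": "user",
         "content": f"Policy excerpts:\n{policy_excerpts}\n\n"
                    "Question: should we approve remote work for the new team?"},
    ],
)
print(response.choices[0].message.content)
```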

1