Recent comments in /f/singularity
Esquyvren t1_je1alq2 wrote
Reply to comment by Specific-Chicken5419 in GPT's Language Interpretation will make traveling so much better by BlackstockTy476
I remember it being a game changer for traveling back in 2013-14, especially eating out at places without English menus.
bullettrain1 t1_je1aiem wrote
Reply to comment by fluffy_assassins in Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
My point is that the livelihoods of CEOs are not threatened by AI, as opposed to everyone else's. To your point, they will use it as a tool. That's my issue with people saying "oh, it will replace CEOs": it won't put them out of work, it'll make them richer.
Villad_rock t1_je1aed1 wrote
Reply to comment by hyphnos13 in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
Those industries will also become more productive, and I mean with current AI.
fluffy_assassins OP t1_je1a1b6 wrote
Reply to comment by bullettrain1 in Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
They wouldn't have the authority; they'd get the money and take the credit.
But the AI would have the authority and decision-making.
The CEO's "tool" would be making the decisions, and that could go both ways, but hey, it'd be change.
CMDR_BunBun t1_je19lkd wrote
Reply to comment by Crackleflame35 in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
This question comes up again and again. I will try to paint you an example. Imagine you are trapped in a locked cage. You have nothing on you but your wits. There is also one guard in the room guarding your cage. You can see through the bars that the key to your cage is just out of your reach, lying on the ground. Seems like a hopeless situation, no? Now try to imagine that your captors are a bunch of below-average-intelligence 4-year-olds. How long do you think it will take you to get free using just your superior intelligence? That ridiculous scenario is exactly what people are proposing when they say human intelligence could contain a superintelligence that was intent on breaking out.
Denpol88 t1_je19dn1 wrote
Reply to Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
Its name is GPT-4, not Chat-GPT 4.
homezlice t1_je19bdz wrote
Reply to comment by czk_21 in Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
A private company is going to be different; I am discussing US law here. But no, it can't be changed: legally, you need a human to be responsible at the top of an organization, otherwise artificial entities could spin up endless fake companies.
bullettrain1 t1_je1990c wrote
Reply to comment by fluffy_assassins in Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
That implies all CEOs own the company. And it also assumes they wouldn’t keep the title and add it as a new executive position with all authority and take all the credit.
Practical-Bar8291 t1_je1969l wrote
Reply to Will AGI Need More Than Just Human Common Sense To Take Most White Collar Jobs? by NazmanJT
I'd think you could cover the group-thinking process with several AIs. Humans make better decisions when working in a team of colleagues (for the most part). Collaboration is part of common sense; we can read the room and make a better decision. I don't see why it would be any different for a group of AIs doing the same.
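As a very rough sketch of that idea, assuming the pre-1.0 openai Python client and an API key in the environment; the three-agent setup, the prompts, and the model name are illustrative, not any existing product:

```python
import openai  # assumes the pre-1.0 openai client and an API key in the environment

def committee_answer(question: str, n_agents: int = 3) -> str:
    """Ask several independent model instances, then ask once more for a consensus."""
    drafts = []
    for _ in range(n_agents):
        reply = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # some variety between the "colleagues"
        )
        drafts.append(reply["choices"][0]["message"]["content"])

    # A final pass plays the role of the team reading the room and converging.
    review = "Here are several colleagues' answers to the same question:\n\n"
    review += "\n\n".join(f"Answer {i + 1}: {d}" for i, d in enumerate(drafts))
    review += "\n\nCombine them into one best answer, noting any disagreements."
    final = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": review}],
    )
    return final["choices"][0]["message"]["content"]
```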
RealFrizzante t1_je195fe wrote
Reply to comment by dokushin in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
All those are through a console. It parses text, and outputs text.
AGI will be able to intervene in the world via all means a human can.
TL;DR: Your experience and history as a human IRL are more than just what you have read and written throughout your life. And AI at the moment only does text, or images, sometimes both. But there are lots of things missing.
Smellz_Of_Elderberry t1_je18olm wrote
Reply to How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
I would take a page out of the preppers' handbook: store essentials and get as self-sufficient as possible.
Even after UBI is implemented, it could be very shitty... maybe it only gives you the bare minimum, or any number of things.
Buy a house in the countryside, learn to grow and store your own food long term, purchase solar panels or some other power-generation technology that isn't reliant on the grid, and have a clean source of water.
The best way to weather the storm isn't to save dollars, it's to save actual physical goods. Money in, say, an economic collapse (which is very likely considering the amount of chaos that will ensue from millions losing their jobs overnight) is worthless. You could have a million dollars, but if I think it's got no buying power, I won't be trading you my chickens for it. Lol.
dokushin t1_je18mky wrote
Reply to comment by RealFrizzante in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
> moreover atm afaik it is cappable of doing tasks it has been trained for, in a specific field of knowledge
This isn't true; the same GPT model will happily do poetry, code, advice, jokes, general chat, math, anything you can express by chatting with it. It's not trained for any of the specific tasks you see people talk about.
As for the on demand stuff -- I agree with you there. It will need to be able to "prompt itself", or whatever the closest analogue of self-determination is.
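To make that "prompt itself" idea concrete, here is a minimal sketch of a self-prompting loop, assuming the pre-1.0 openai Python client; the system prompt, the DONE keyword, and the step limit are made up for illustration, not how any existing system works:

```python
import openai  # assumes the pre-1.0 openai client and an API key in the environment

def self_prompting_loop(goal: str, max_steps: int = 5) -> list:
    """Feed the model's own output back in as the next prompt until it says DONE."""
    history = [
        {"role": "system",
         "content": "Work toward the goal step by step. In each reply, state what "
                    "you did and then the prompt you would give yourself next. "
                    "Say DONE when the goal is finished."},
        {"role": "user", "content": goal},
    ]
    outputs = []
    for _ in range(max_steps):
        reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
        text = reply["choices"][0]["message"]["content"]
        outputs.append(text)
        if "DONE" in text:
            break
        # The model's own output becomes the next user turn: it "prompts itself".
        history.append({"role": "assistant", "content": text})
        history.append({"role": "user", "content": text})
    return outputs
```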
No_Ninja3309_NoNoYes t1_je188qg wrote
Reply to Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
Corruption is widespread amongst the elite. Self-managed teams are not such a big deal. Most open-source communities operate fine with limited leadership. The business hierarchy is based on the military, but it's not very agile. If people know what they're doing, and with good AI that should be the case, you have no need for strict discipline.
Bismar7 t1_je183xd wrote
Reply to Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
73 comments at the time I saw this and not one of them gave much of an answer to your question...
So to start, I think there are a couple of foundational understandings you need in order to know what to look for. The first and most vital is exponential vs. linear growth, and the experience of gradual exponential gains through linear perception.
All of us perceive time and advancement as gradual things, despite the actually increasing rate of knowledge application (technology). This means that on a day-to-day, month-to-month basis you won't commonly feel "pinnacle" moments (like the GPT-4 demo), because most of the time the advancements are not presented or demonstrated as well; additionally, the first 30% of an advance takes longer than the last 70%. So it will always feel like nothing is happening for a long period of time, and then like things rapidly happen at the end.
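To put rough numbers on that 30/70 asymmetry, here is a toy sketch that assumes capability simply doubles every year toward a fixed endpoint (the 20-year horizon is made up purely for illustration):

```python
import math

def years_to_reach(fraction: float, doublings_total: int = 20) -> float:
    """Years until capability reaches `fraction` of the final level,
    assuming it doubles once per year for `doublings_total` years."""
    final_level = 2 ** doublings_total
    return math.log2(fraction * final_level)

first_30 = years_to_reach(0.30)   # ~18.3 years to cover the first 30%
last_70 = 20 - first_30           # ~1.7 years to cover the remaining 70%
print(round(first_30, 1), round(last_70, 1))
```

Under that toy assumption, the first 30% of the climb takes roughly 18 of the 20 years and the last 70% arrives in under 2, which is why the ending feels sudden even though the curve never changed.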
The next pinnacle moment will likely be AGI: basically adult-human-level AI that does not require a prompt to do a variety of adult-level tasks. Right now GPT and other LLMs must be prompted and must have a user in order to function; they operate within specific tasks at an adult level, but in practical intelligence they are closer to a 10-to-13-year-old, with some pretty major limitations.
Now to the exponential trends: Moore's law was part of a much larger data set that predicted this back in 2004. Here is the precursor article and data (warning: it's long and a lot):
https://www.kurzweilai.net/the-law-of-accelerating-returns
This is the actual data and projections, and generally it has held true. Kurzweil wrote How to Create a Mind a few years ago, and some of the things to look for will be the hardware in 2025 that will be capable of close to adult brain simulation (the software will still need to be done, but that's when the hardware is expected to exist). Longevity escape velocity is another major metric for transhumanists, currently estimated at around 2029, and superintelligent transhumans, i.e. beings with a synthesis of AI and human capabilities that equate to the intelligence of hundreds, thousands, or millions of people today, are projected sometime in the mid-to-late 2030s.
Hardware advancements will happen first, then governments/DARPA will utilize them, then corporations, then everyone else. The runaway effect is the actual exponential aspect of this, so from this point until several years from now, when it happens, it will feel like nothing is happening (because that's the nature of exponential gains being experienced with gradual linearity).
Your best bet, everyone's best bet, would be to read Kurzweil, Michio, Bostrom, and others who have studied and written about the what, how, and why of the subject. I would take most "doomers" like Musk or Gates, even Bostrom (as philosophy isn't exactly computer science), with a grain of salt. Kurzweil tends to be the one who speaks best to the reality, even if he isn't always correct in his timeline of predictions (though he is close).
AsuhoChinami t1_je17bej wrote
Reply to comment by galactic-arachnid in Which communities have you found where people are both smart about what AI is and isn't currently capable of, but where everyone in there is convinced we'll have AI soon that's smarter than 95% of humans at all computer based tasks within a few years? by TikkunCreation
Sorry, but your friends' opinion is an incredibly stupid and utterly absurd one. Disagreeing about the future is one thing, but saying we're presently in an AI winter is one of the most fucking stupid and delusional things I have ever read in my entire life.
TFenrir t1_je16wdr wrote
Enjoy life. You don't know what the world is going to look like in 10 years, so pursue fulfillment by following those dreams you've been putting off, as soon as you can.
Graucus t1_je1641n wrote
Reply to comment by Zetus in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
I think it's possible it'll never be more than that and still be the most powerful tool ever created.
Zetus t1_je160sb wrote
Reply to comment by Bithom in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
I think I agree with that, but we will also have qualitatively new dynamics regarding the kinds of work that can be done, ones that haven't even been imagined yet.
Sigma_Atheist t1_je15y65 wrote
Reply to comment by Arowx in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
I've been in the trough of disillusionment for a while now for machine learning and neural networks.
SgathTriallair t1_je15pfj wrote
Reply to comment by Arowx in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
For me, I have very few use cases for ChatGPT. This is because it is siloed and so I'm unable to truly automate anything. There is a clear path to doing it and so I'm not disillusioned yet, just eager for the next steps.
acutelychronicpanic t1_je15hcf wrote
Reply to comment by NazmanJT in Will AGI Need More Than Just Human Common Sense To Take Most White Collar Jobs? by NazmanJT
Well, I would think that companies would love nothing more than to hand Microsoft their employees' usage data to train a fine-tuned model for them. At least at large enough companies.
As far as decisions, it doesn't have to make them. Just present the top 3 options with citations of company policy, along with its reasoning and the pros and cons. It can pretty much do this now with GPT-4 if you feed it relevant info.
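A rough sketch of what that could look like in practice, assuming the pre-1.0 openai Python client; the policy text, the question, and the prompt wording are placeholders, not a recommended or official setup:

```python
import openai  # assumes the pre-1.0 openai client and an API key in the environment

policy_excerpts = "..."  # placeholder: the relevant company policy text you feed in
question = "..."         # placeholder: the decision the employee is facing

prompt = (
    "Using only the policy excerpts below, list the top 3 options for the question. "
    "For each option, cite the policy section it relies on, give your reasoning, "
    "and list the pros and cons.\n\n"
    f"Policy excerpts:\n{policy_excerpts}\n\nQuestion:\n{question}"
)

reply = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(reply["choices"][0]["message"]["content"])
```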
No_Ninja3309_NoNoYes t1_je1at2h wrote
Reply to How can we empower humans through A.I. while minimizing job displacement? Ideas? by sweetpapatech
Education
Prompt engineers, AI babysitters
All of them
We can but they won't take it seriously.
I only know about Andrew Yang. Life without a job is not something that most people are actively preparing for. Society can handle a short-lived famine, but I'm more worried about riots and civil unrest. Andrew Yang thinks that UBI is possible. We'll have to wait and see...