Recent comments in /f/singularity
brycedriesenga t1_jearwhs wrote
Reply to comment by LiveComfortable3228 in Where do you place yourself on the curve? by Many_Consequence_337
> terminators will not kill humanity by 2024 either
Well, what's the point of all this then‽
brycedriesenga t1_jeart9s wrote
Reply to comment by maskedpaki in Where do you place yourself on the curve? by Many_Consequence_337
Toasted and served with a bag of chips?
nobodyisonething OP t1_jearpl9 wrote
Reply to comment by byttle in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
How much does Reddit pay us for opinions today? Why will companies start doing that tomorrow?
theotherquantumjim t1_jearid2 wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
That is one school of thought certainly. There are plenty in academia who argue that maths is fundamental
[deleted] t1_jearfs3 wrote
ertgbnm t1_jear5fe wrote
There's a lot of research into how quantum computers could be used to help train neural networks with lower compute requirements. But I put a very low probability on quantum computing scaling fast enough to be useful at the sizes of models we're working with in the near future.
Saerain t1_jeaqxmu wrote
Reply to comment by Cypher10110 in This image felt a bit more meaningful given current events re:pausing AI. by Cypher10110
Hm. I haven't seen "the west needs to win" so much as should be free to participate. Especially if you fear the existential risk of unaligned AGI, it seems counterproductive to intentionally push its development away from your sphere of values.
confused_vanilla t1_jeaqlwu wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
China, Russia, and probably others: "Suuuure we'll stop if you will" *continues to develop better AI models in secret*
It's inevitable. There's no stopping it now, and to try is to allow others to figure it out first.
goatsdontlie t1_jeaptg7 wrote
Reply to comment by Lokhvir in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
It does recognize it... Maybe it's a bit finicky. I'm Brazilian and it worked. I put "São Paulo, XXXXX-XXX"
Specific-Chicken5419 t1_jeapt3x wrote
Reply to LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Think they will be hiring noobs? I'd be interested.
3z3ki3l t1_jeapp2j wrote
Reply to comment by genericrich in GPT characters in games by YearZero
Well, they solved that with ChatGPT already, so no issue there. Especially without player dialogue input.
acutelychronicpanic t1_jeap8m1 wrote
Reply to comment by Thatingles in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Yeah, I greatly respect him too. I've been exposed to his ideas for years.
It's not that it wouldn't work if we did what he suggests. It's that we can't do it. It's just too easy to replicate for any group with rather modest resources. There are individual buildings that were more expensive than SOTA LLMs.
The toothpaste is out of the tube with transformers and large language models. I don't think most people, even most researchers, had any idea that it would be this "easy" to make this much progress in AI. That's why everyone's guesses were 2050+. I've heard people with PhDs confidently say "not in this century" within the last 5-10 years.
Heck, Ray Kurzweil looks like a conservative or at least median in this current timeline (I never thought I would type that out).
WonderFactory t1_jeap8lf wrote
Reply to comment by genericrich in GPT characters in games by YearZero
I think it can work; I'm trying to get it working in a game I'm developing at the moment. You have to have a mix of randomness and structured storytelling. I literally have to say to ChatGPT: the user is saying xyz; reply to the user but try to work this plot point into your reply.
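A minimal sketch of that prompt structure (the function name, persona, and example strings are mine, not from the game; the commented-out call uses OpenAI's chat completions API, which the comment below says they rely on):

```python
# Hypothetical sketch: wrap the player's line in a directive telling
# the model which plot point to weave into the NPC's reply.

def build_npc_prompt(npc_persona, player_line, plot_point):
    """Build chat messages steering the NPC's reply toward a plot point."""
    return [
        {"role": "system",
         "content": f"You are {npc_persona}. Stay in character and keep replies short."},
        {"role": "user",
         "content": (f'The player says: "{player_line}"\n'
                     f"Reply to the player, but try to work this plot point "
                     f"into your reply: {plot_point}")},
    ]

messages = build_npc_prompt(
    "a weary ferryman in a post-singularity underworld",
    "Why is the river glowing?",
    "the old reactor upstream has started leaking again",
)

# With an OpenAI API key configured, the call would look something like:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-3.5-turbo", messages=messages,
# ).choices[0].message.content
```

Keeping the plot directive in the user message rather than the system prompt lets each turn carry a different story beat while the persona stays fixed.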
WonderFactory t1_jeaotp1 wrote
Reply to GPT characters in games by YearZero
I'm actually adding ChatGPT NPCs to the Unreal Engine 5 game I'm developing at the moment. It's a roguelike set in a post-singularity world, and gameplay is similar to Hades, so there's plenty of dialogue in the game. At the minute it's difficult to get a model to run on the local PC, so I'm using OpenAI's API. There are challenges like latency while you're waiting for the API to return. It's also quite expensive, so releasing a free demo of the game is out of the question. It could potentially cost several dollars per user in API fees over the life of the game, which will of course limit your pricing flexibility; you can only reduce the price so much in Steam sales etc. I'm hoping, though, that inference costs will come down by the time the game is finished.
I haven't posted any footage with the GPT dialogue added to the game but I might post it here in a couple of weeks.
genericrich t1_jeaoj5c wrote
Reply to comment by 3z3ki3l in GPT characters in games by YearZero
OK, if you can verify that it won't become racist or introduce another problem that an AAA game studio won't want in their game, then go for it.
byttle t1_jeaofw7 wrote
AI will start paying us for our opinions most likely. It’s going to want the truth with a capital T. Tested, trusted, trillions of times. How do you get truth nowadays?
3z3ki3l t1_jeanw6e wrote
Reply to comment by genericrich in GPT characters in games by YearZero
Eh. Skyrim already has generated side-quests. It creates characters for you to go kill, and items for you to steal. I don’t think a little bit of dialogue, whether it fits perfectly or not, would break that much.
Saerain t1_jeand2j wrote
Feels like the slope of enlightenment to me because my hype was peaking in 2013 and is just coming out of a trough.
ReignOfKaos t1_jean8y9 wrote
Reply to comment by apinanaivot in Where do you place yourself on the curve? by Many_Consequence_337
Yes. Exponential functions look like s-curves when applied to reality.
sweatierorc t1_jeamh38 wrote
Reply to comment by TruckNuts_But4YrBody in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
It will probably be funded by billionaire philanthropists and large corporations. I could see Nvidia using this as a way to promote their GPUs. Musk or Zuck could use it for PR. Even Gates may drop a buck, just to act like he actually cares.
el_chaquiste t1_jeameqc wrote
Reply to comment by FaceDeer in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
> for everyone to use
This is the part I don't buy. There will be queues and some will be more equal than others.
Andriyo t1_jeam834 wrote
Reply to comment by theotherquantumjim in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
There is nothing fundamental behind 1+1=2. It's just the language that we use to describe reality as we observe it as humans. And even beyond that, it's cultural: some tribes have "1", "2", "3", "many" math, and to them it is as "fundamental" as the integer number system is to us. The particular algebra of 1+1=2 was invented by humans (and some other species) because we are evolutionarily optimized to work with discrete objects, to detect threats and such.
I know Plato believed in the existence of numbers or "Ideas" in a realm that transcended the physical world but it's not verifiable so it's just that - a belief.
So children just learn the language of numbers and arithmetic like any other language, by training on examples - statistically. There might be some innate training that happened at the DNA level, so we're predisposed to learn about integers more easily, but that doesn't make "1+1=2" something that exists on its own to be discovered, like, say, gravity or fire.
[deleted] t1_jeam4uq wrote
[deleted]
el_chaquiste t1_jeam1w6 wrote
Reply to comment by ReasonablyBadass in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Only the priesthood of some ML school of thought will get access, as is usual with such public organizations, where preeminent members of some specific clergy rule.
Private companies and hackers with better algorithms will run circles around them, unless threatened with the bombing of their datacenters or jailed for owning forbidden GPUs, that is.
qrayons t1_jeat09f wrote
Reply to comment by [deleted] in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I've heard that before, though I wonder how much of that is just semantics/miscommunication. Like people are saying they can't visualize anything because it's not visualized as clearly and intensely as an actual object in front of them.