Recent comments in /f/singularity
Yomiel94 t1_jdwew0m wrote
Reply to comment by tupper in Story Compass of AI in Pop Culture by roomjosh
He doesn’t die in the movie, but it’s implied that he will.
czmax t1_jdweqo9 wrote
Reply to comment by roomjosh in Story Compass of AI in Pop Culture by roomjosh
Is the sentiment / alignment in your plot about the theme of the movie as a whole rather than the role AI plays in the plot?
For example, in WALL-E, is AI itself ever a problem? I think of it more as a story about sustainability and what people focus on. AI is just a form of character with mostly positive associations.
citizentim OP t1_jdwegw6 wrote
Reply to comment by throndir in Question: Could you Train an LLM to have a "Personality?" by citizentim
Ahhh-- I forgot about Character.ai! Right...I guess it COULD be done-- and probably better than that dude did it.
BecauseIBuiltIt t1_jdwe9wf wrote
Reply to comment by clean_inbox in Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
Honestly fair, I can see why you'd think that. Tbh, that's one of the reasons I liked it, though. More so in later seasons: by having a more developed cast, it allowed for some reflection on what it means to be human, and on the humanity of AI. I thought it was interesting how the show tackled that, using different characters to reflect different things in each other, and in the two opposing ASIs.
throndir t1_jdwe9hp wrote
You could add a set of instructions on top of an LLM, provided you give it enough personality information. Similar to what http://character.ai does.
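Roughly like this, as a minimal sketch; it assumes the OpenAI chat API (pre-1.0 Python library) as the backend, with a made-up persona and model name, since character.ai's actual setup isn't public:

```python
# Minimal sketch: layering a fixed personality on top of a base LLM via a system prompt.
# The persona text and model name are made-up placeholders.
import openai  # reads the OPENAI_API_KEY environment variable by default

PERSONA = (
    "You are 'Rook', a grumpy but lovable retro-computing enthusiast. "
    "You speak casually, reference floppy disks constantly, and never break character."
)

def ask_persona(user_message: str) -> str:
    # The system message carries the personality; the user message is the actual query.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(ask_persona("What do you think of cloud storage?"))
```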
Rofel_Wodring t1_jdwdxax wrote
Reply to comment by WATER-GOOD-OK-YES in If you went to college, GPT will come for your job first by blueberryman422
For about eighteen months, tops. Assuming they're one of the lucky ones who weren't undercut by some desperate nursing school dropout willing to work for peanuts.
Actually, trucker might go even faster than garbageman. I can totally imagine a setup where you have a camera mounted on the vehicle that's also connected to a mountable robot that's attached to, and directly manipulates, the drive train. Such a setup wouldn't even require the employer to buy new vehicles, just the drive-train parasite.
Borrowedshorts t1_jdwdkxz wrote
Reply to comment by 94746382926 in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
I don't disagree.
RoamingKnights t1_jdwdk7l wrote
Reply to comment by roomjosh in Story Compass of AI in Pop Culture by roomjosh
That makes perfect sense, thanks so much
WATER-GOOD-OK-YES t1_jdwddey wrote
Reply to comment by HeinrichTheWolf_17 in If you went to college, GPT will come for your job first by blueberryman422
Society mocks truckers and garbagemen. In the future, they will be the ones to have the last laugh.
tupper t1_jdwd3ok wrote
Reply to comment by Yomiel94 in Story Compass of AI in Pop Culture by roomjosh
He doesn't die in the movie. Abandoned with very few ways to leave, sure, but not killed.
roomjosh OP t1_jdwcgv8 wrote
Reply to comment by RoamingKnights in Story Compass of AI in Pop Culture by roomjosh
Jane from the Ender's Game book series scores a +6 as a good AI and the story is a +4 for optimism. (about the same placement as Chappie)
Thunderhead from the Scythe series scores a +8 as a good AI and a +7 for optimism of story. (about the same placement as Star Trek: Voyager or Bicentennial Man)
czmax t1_jdwcbuq wrote
Reply to comment by moonpumper in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
I was hoping that wearables (like a watch) could do this for me. Or at least force development in that direction.
(Seems to not be panning out… but I still have hope. I'd love to only carry a watch for most of my day. Initially I'd go through screen withdrawal, but in the long run I think life would be better.)
exstaticj OP t1_jdwbmuz wrote
Reply to comment by flamegrandma666 in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
It also created an email campaign, video, logo, hero photo, and webpage in very little time.
SkyeandJett t1_jdwb3vi wrote
Reply to comment by agonypants in If you went to college, GPT will come for your job first by blueberryman422
The "easy to repair" part probably isn't as crucial as you think since they'll likely maintain each other.
Anjz OP t1_jdwb284 wrote
Reply to comment by Kinexity in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
While it is true that you can download the entire English Wikipedia in a relatively small size, it does not diminish the potential of AI and LLMs. Wikipedia is a static collection of human-generated knowledge, while AI, such as LLMs, can actively synthesize, analyze, and generate new insights based on available information. AI has the potential to connect disparate pieces of knowledge, create context, and provide personalized assistance. Thus, the comparison between Wikipedia and AI should not be based on size alone, but also on the dynamic capabilities and potential applications that AI offers.
For example, can you ask Wikipedia to walk you through distilling water step by step given only certain tools, or to tell you how to disinfect a wound when no medication is available? Sure, you can find information on those topics, but a lot of people won't know what to do with that information.
That is the difference between Knowledge and Wisdom.
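To make that concrete, here's a rough sketch of asking exactly that kind of question fully offline; it assumes llama-cpp-python as the local runner, and the model path is a placeholder for whatever weights you've downloaded:

```python
# Rough sketch: step-by-step survival guidance from a local LLM, no internet required.
# The model path is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

prompt = (
    "Q: I only have a pot, a smaller bowl, plastic wrap, and a heat source. "
    "Give me step-by-step instructions to distill water.\n"
    "A:"
)
output = llm(prompt, max_tokens=256, stop=["Q:"])
print(output["choices"][0]["text"])  # tailored steps, generated fully offline
```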
ShitPostQuokkaRome t1_jdwavcw wrote
Reply to comment by 94746382926 in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Soon the PR team will be replaced by an AI machine that generates PR excuses for any situation.
extracensorypower t1_jdwati2 wrote
Reply to LLMs are not that different from us -- A delve into our own conscious process by flexaplext
Not that different at all.
ChatGPT and other LLMs mimic the part of our cognition that's best described as "learning by rote." Humans do this with years of play; LLMs do this by being trained on text. In each case, neural nets are set up to create rapid jumps along the highest-weighted probability path, with some randomness and minimal processing thrown in. It's the most computationally cheap method for doing most of what humans do (walking, seeing, talking) and what ChatGPT does (talking). Most of what humans consider "conscious intelligence" exists to train the parts of your brain that are automatic (i.e. like ChatGPT).
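To picture that "highest-weighted probability path with some randomness," here's a toy sketch of temperature sampling; the numbers are invented, not from a real model:

```python
# Toy sketch: temperature sampling over a made-up next-token distribution.
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
    # Lower temperature sharpens the distribution (greedier); higher adds randomness.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for numerical stability
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([2.5, 1.0, 0.2, -1.0])  # invented scores for four candidate tokens
print(sample_next_token(logits))  # usually token 0, occasionally another
```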
The computationally expensive part is what LLMs do not do: verifying facts against sensory data, rule-based processing, accessing and checking curated, accurate data, internal real-world rule-based modeling with self-correction, and, most importantly, a top-level neural net layer that coordinates all of these things. Generally we call this "thinking."
The hard parts haven't been done yet, but they will be, and soon. So, right now LLMs are not AGI, but we'll fill in the missing pieces soon enough as the picture of what intelligence is and isn't becomes clearer.
vernes1978 t1_jdwa6v2 wrote
Reply to comment by arxtrooper in Story Compass of AI in Pop Culture by roomjosh
Transcendence is about humanity being so stuck in certain tropes that they'd rather nuke the entire planet than accept that the man they uploaded into the AI framework is still the man they uploaded into the AI framework.
> We've tried talking him out of fixing problems and he keeps responding in dialogue and keeps fixing problems.
So naturally we had to destroy our entire technological progress back to the steam-age.
RoamingKnights t1_jdwa4i8 wrote
Reply to Story Compass of AI in Pop Culture by roomjosh
How would it classify Jane from the Ender’s Game book series or the Thunderhead from the Scythe series?
Kinexity t1_jdw9p5d wrote
Reply to AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
This doesn't have much to do with LLMs or AI. You can download the whole English Wikipedia; it takes a fraction of your compute to open and only weighs ~60GB.
Postnificent t1_jdw9hrq wrote
Reply to AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Regress to caveman ways? If the internet goes down, are we burning all the libraries, books, and schools? If the internet went down today, we might just be in a better place than we were yesterday. Period.
Surur t1_jdw97qs wrote
Reply to comment by Equal_Position7219 in What’s missing from the AI conversation by Equal_Position7219
Emotion is just a diffuse version of more instrumental facts.
E.g., fear is recognition of the risk of destruction, love is recognition of alliance, hate is recognition of opposing goals, etc.
AsheyDS t1_jdw8ol5 wrote
It's not as simple as emotional vs. not-emotional. First, AGI would need to interact with us... The whole point of it is to assist us, so it will have to have an understanding of emotion. And to put it simply, a generalization method relating to emotion would need a frame of reference (or grounding, perhaps) and will at least have to understand the dynamics involved.
Second, AGI itself can have emotion, but the goal of that is key to how it should be implemented. There's emotional data, which could be used in memory, in processing memory, recall, etc. This would be the minimum necessary, and out of this it could probably build an associative map anyway. But I think purposefully structuring emotion to coordinate social interactions, and everything related to that, would help.
The problem with an emotional AGI, or at least the thing people are concerned will become a problem, is emotional impulsivity. We don't want it reacting unfavorably, or judgmentally, or with rage, malice, or contempt. And there's also the concern that it will form emotional connections to things that start to alter its behavior in increasingly unpredictable ways. This is actually a problem for its functioning as well, since we want a well-ordered system that is able to predict its own actions. If it becomes unpredictable to itself, that could degrade its performance. However, eliminating emotion altogether would degrade the quality of social interaction and its understanding of humans and humanity, which is a big downside.
The best option would be to include emotion on some level, where it is used as a dynamic framework for interacting with and creating emotional data, utilizing it socially, and participating socially to gain more overall understanding. But these emotions would just be particular dynamics tied to particular data and inputs. As long as they don't affect the parts of the overall AGI system that govern actionable outputs (especially reflexive action) or anything that would lead to impulsivity, and as long as other safety functions work as expected, emotion should be a beneficial thing to include.
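To make the "emotional data in memory and recall" idea concrete, here's a purely hypothetical sketch; the field names and scoring rule are inventions, just one way emotion could bias recall without touching actionable outputs:

```python
# Hypothetical illustration: emotional tags on memories bias recall ranking,
# but nothing here feeds action selection directly.
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    relevance: float  # match to current context, 0..1
    valence: float    # emotional charge, -1 (negative) to +1 (positive)
    arousal: float    # emotional intensity, 0..1

def recall_score(m: Memory, emotion_weight: float = 0.3) -> float:
    # Rank mostly by relevance; emotional salience acts only as a secondary bias.
    emotional_salience = abs(m.valence) * m.arousal
    return (1 - emotion_weight) * m.relevance + emotion_weight * emotional_salience

memories = [
    Memory("user praised the plan", relevance=0.6, valence=0.8, arousal=0.7),
    Memory("system error last week", relevance=0.7, valence=-0.5, arousal=0.4),
]
for m in sorted(memories, key=recall_score, reverse=True):
    print(m.content)
```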
TheTomatoBoy9 t1_jdw8jk7 wrote
Reply to Story Compass of AI in Pop Culture by roomjosh
In "Her", the protagonist gets cucked by the AI. It's a NTR romance. Change my mind
28mmAtF8 t1_jdwfvb8 wrote
Reply to comment by WATER-GOOD-OK-YES in If you went to college, GPT will come for your job first by blueberryman422
I don't think society mocks them nearly as much as they perceive. Their management, on the other hand? Welcome to gaslight city.
(Source: I've done delivery jobs and worked with truckers and the scumbag management they're usually shafted with.)
Edit: Ew, that brings up an uglier point too. If AI is going to take an even more aggressive role in management, that means the knock-on effects for blue collar workers won't be all that pleasant either.