Recent comments in /f/singularity

28mmAtF8 t1_jdwfvb8 wrote

I don't think society mocks them nearly as much as they perceive. Their management, on the other hand: welcome to gaslight city.

(Source: I've done delivery jobs and worked with truckers and the scumbag management they're usually shafted with.)

Edit: Ew, that brings up an uglier point too. If AI is going to take an even more aggressive role in management, the knock-on effects for blue-collar workers won't be all that pleasant either.

19

czmax t1_jdweqo9 wrote

Is the sentiment / alignment in your plot about the theme of the movie as a whole rather than the role AI plays in the plot?

For example, in WALL-E, is AI itself ever a problem? I think of it more as a story about sustainability and what people focus on. AI is just another character, with mostly positive associations.

1

BecauseIBuiltIt t1_jdwe9wf wrote

Honestly fair, I can see why you'd think that. Tbh, that's one of the reasons I liked it, though. Especially in later seasons, the more developed cast allowed for some reflection on what it means to be human, and on the humanity of AI. I thought it was interesting how the show tackled that, using different characters to reflect different things in each other and in the two opposing ASIs.

2

Rofel_Wodring t1_jdwdxax wrote

For about eighteen months, tops. Assuming they're one of the lucky ones who weren't undercut by some desperate nursing school dropout willing to work for peanuts.

Actually, trucker might go even faster than garbageman. I can totally imagine a setup where a camera mounted on the vehicle is connected to a mountable robot that attaches to and directly manipulates the drivetrain. Such a setup wouldn't even require the employer to buy new vehicles, just the drivetrain parasite.

6

roomjosh OP t1_jdwcgv8 wrote

Jane from the Ender's Game book series scores a +6 as a good AI and a +4 for story optimism (about the same placement as Chappie).

Thunderhead from the Scythe series scores a +8 as a good AI and a +7 for story optimism (about the same placement as Star Trek: Voyager or Bicentennial Man).

2
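
A rough sketch of how placements like these might be plotted, assuming a -10 to +10 scale on both axes (the scale and the plotting code are assumptions; the coordinates come from the scores above):

```python
# Hypothetical reconstruction of the placement chart, not the OP's actual code.
import matplotlib.pyplot as plt

placements = {
    "Jane (Ender's Game)": (6, 4),
    "Thunderhead (Scythe)": (8, 7),
    "Chappie": (6, 4),               # "about the same placement" as Jane
    "Star Trek: Voyager": (8, 7),    # "about the same placement" as Thunderhead
    "Bicentennial Man": (8, 7),
}

fig, ax = plt.subplots()
for title, (alignment, optimism) in placements.items():
    ax.scatter(alignment, optimism)
    ax.annotate(title, (alignment, optimism), textcoords="offset points", xytext=(5, 5))

# Assumed axis scheme: x = how good/evil the AI is, y = how optimistic the story is.
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
ax.axhline(0, linewidth=0.5)
ax.axvline(0, linewidth=0.5)
ax.set_xlabel("AI alignment (evil ... good)")
ax.set_ylabel("Story tone (pessimistic ... optimistic)")
plt.show()
```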

czmax t1_jdwcbuq wrote

I was hoping that wearables (like a watch) could do this for me. Or at least force development in that direction.

(Seems to not be panning out… but I still have hope. I'd love to only carry a watch for most of my day. Initially I'd go through screen withdrawal, but in the long run I think life would be better.)

1

Anjz OP t1_jdwb284 wrote

While it is true that you can download the entire English Wikipedia in a relatively small size, it does not diminish the potential of AI and LLMs. Wikipedia is a static collection of human-generated knowledge, while AI, such as LLMs, can actively synthesize, analyze, and generate new insights based on available information. AI has the potential to connect disparate pieces of knowledge, create context, and provide personalized assistance. Thus, the comparison between Wikipedia and AI should not be based on size alone, but also on the dynamic capabilities and potential applications that AI offers.

For example, can you ask Wikipedia to tell you, step by step, how to distil water given only certain tools, or how to disinfect a wound when there's no medication available? Sure, you can find information on it, but a lot of people won't know what to do with that information.

That is the difference between Knowledge and Wisdom.

1
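
A rough sketch of the contrast being drawn, using the OpenAI Python SDK (v1+) as a stand-in for "an LLM"; the model name, prompt, and tool list are only examples:

```python
# A static dump can only be searched; an LLM can be asked to synthesize a
# step-by-step answer constrained to the tools you actually have on hand.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "I only have a metal pot, a smaller bowl, plastic wrap, and a campfire. "
    "Give me step-by-step instructions to distil water with just these tools."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
# A Wikipedia dump could return the article on distillation; it couldn't
# tailor the procedure to the pot, bowl, and plastic wrap you listed.
```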

extracensorypower t1_jdwati2 wrote

Not that different at all.

chatGPT and other LLMs mimic the part of our cognition that's best described as "learning by rote." Humans do this with years of play. LLMs do this by being trained on text. In each case, neural nets are set up to create rapid jumps along the highest weighted probability path with some randomness and minimal processing thrown in. It's the most computationally cheap method for doing most of what humans do (walking, seeing, talking) and what chatGPT does (talking). Most of what humans consider "conscious intelligence" exists to train the parts of your brain that are automatic (i.e. like chatGPT).

The computationally expensive part is what LLMs do not do: verification of facts via sensory data, rule-based processing, accessing and checking curated, accurate data, internal real-world rule-based modeling with self-correction, and most importantly, a top-level neural net layer that coordinates all of these things. Generally we call this "thinking."

The hard parts haven't been done yet, but they will be, and soon. So, right now LLMs are not AGI, but we'll fill in the missing pieces soon enough as the picture of what intelligence is and isn't becomes clearer.

6
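
To make the "highest weighted probability path with some randomness" idea concrete, here is a toy sketch of greedy versus temperature sampling over an invented next-token distribution; the tokens and scores are made up for illustration, not taken from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

tokens = ["walk", "see", "talk", "verify", "plan"]
logits = np.array([2.0, 1.5, 1.2, -1.0, -0.5])  # hypothetical scores from a trained net

def sample(logits, temperature=1.0):
    """Low temperature is nearly greedy (the highest-weighted path);
    higher temperature adds the randomness the comment mentions."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

greedy = tokens[int(np.argmax(logits))]                           # cheapest path, no checking
sampled = [tokens[sample(logits, temperature=0.8)] for _ in range(5)]
print(greedy, sampled)
# What this loop never does is the "expensive" part described above:
# checking the chosen token against sensory data, rules, or a world model.
```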

vernes1978 t1_jdwa6v2 wrote

Transcendence is about humanity so stuck in certain tropes that they'd rather nuke the entire planet than accept that the man they uploaded into the AI framework is still the man they uploaded into the AI framework.
> We've tried talking him out of fixing problems and he keeps responding in dialogue and keeps fixing problems.
So naturally we had to knock our entire technological progress back to the steam age.

3

AsheyDS t1_jdw8ol5 wrote

It's not as simple as emotional vs. not-emotional. First, AGI would need to interact with us... the whole point of it is to assist us, so it will have to have an understanding of emotion. To put it simply, a generalization method relating to emotion would need a frame of reference (or grounding, perhaps) and will at least have to understand the dynamics involved.

Second, AGI itself can have emotion, but the goal of that is key to how it should be implemented. There's emotional data, which could be used in memory, in processing memory, recall, etc. This would be the minimum necessary, and out of this it could probably build an associative map anyway. But I think purposefully structuring emotion to coordinate social interactions, and everything related to that, would help.

The problem with an emotional AGI, or at least the thing people are concerned will become a problem, is emotional impulsivity. We don't want it reacting unfavorably, or judgmentally, or with rage, malice, or contempt. And there's also the concern that it will form emotional connections to things that start to alter its behavior in increasingly unpredictable ways. This is actually a problem for its functioning as well, since we want a well-ordered system that is able to predict its own actions. If it becomes unpredictable to itself, that could degrade its performance. However, eliminating emotion altogether would degrade the quality of social interaction and its understanding of humans and humanity, which is a big downside.

The best option would be to include emotion on some level, where it is used as a dynamic framework for interacting with and creating emotional data, utilizing it socially, and participating socially to gain more overall understanding. But these emotions would just be particular dynamics tied to particular data and inputs. As long as they don't affect the parts of the overall AGI system that govern actionable outputs (especially reflexive action) or anything that would lead to impulsivity, and as long as other safety functions work as expected, emotion should be a beneficial thing to include.

1
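
A purely hypothetical sketch of the separation described above, with invented field names and weights, just to show the shape of emotional data that biases memory recall but never touches actionable outputs:

```python
# Speculative illustration only; none of this reflects a real AGI design.
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    content: str
    relevance: float                               # match to the current situation
    emotion: dict = field(default_factory=dict)    # e.g. {"valence": -0.8}

def recall(memories, top_k=2):
    """Emotional salience is allowed to influence which memories surface."""
    def salience(m):
        return m.relevance + 0.3 * abs(m.emotion.get("valence", 0.0))
    return sorted(memories, key=salience, reverse=True)[:top_k]

def choose_action(recalled, candidate_actions):
    """Action selection sees only recalled content, never the emotion tags,
    so a charged memory can't directly trigger an impulsive output."""
    context = set(" ".join(m.content for m in recalled).split())
    return max(candidate_actions, key=lambda a: len(set(a.split()) & context))

memories = [
    MemoryRecord("user was frustrated by slow replies", 0.6, {"valence": -0.8}),
    MemoryRecord("user prefers short answers", 0.7, {"valence": 0.1}),
]
print(choose_action(recall(memories), ["give short answers", "give long detailed answers"]))
```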