Recent comments in /f/singularity

Tiamatium t1_jdzrfya wrote

If you can't tell the difference, does it matter?

How are we regulating Photoshop today? How are we regulating digital art today? How are we regulating flat out plagiarism today?

Why the fuck do you want to *regulate art* of all things?! Do you think people should need a special license to create art?! What the fuck is up with this gatekeeping?

None of those problems are unique to AI, and none are real. AI is just a tool, and while I know that certain artists want to fight it, ban it, or get paid for... "being fucked" by it, that is not new. In fact, we had this exact problem back in the mid-1800s with the rise of photography. There is a famous rant from the 1860s(?) about all the talentless losers (not my words, I am paraphrasing the author of the rant) who can't paint and can't graduate from university becoming photographers. Painters who used photographs for reference had to hide it. Painters who said art has to adapt were systematically pushed out of the art world and exhibits.

So that is literally not a new problem

11

czk_21 t1_jdzr8s1 wrote

> it always predicts the same output probabilities from the same input

It does not; you can adjust this with the "temperature" parameter.

The temperature determines how greedy the generative model is.

If the temperature is low, the probability of sampling anything other than the token with the highest log probability is small, so the model will tend to output the most likely text: usually correct, but rather boring, with little variation.

If the temperature is high, the model can output, with fairly high probability, tokens other than the most likely ones. The generated text will be more diverse, but there is a higher chance of grammar mistakes and generated nonsense.
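The mechanism can be sketched with a toy softmax sampler (illustrative only, not any particular model's actual implementation): dividing the logits by the temperature before the softmax sharpens the distribution when T < 1 and flattens it when T > 1.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample a token index from raw logits after temperature scaling.

    T < 1 sharpens the distribution (nearly greedy as T -> 0);
    T > 1 flattens it toward uniform (more diverse, more nonsense).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]
```

With logits like `[2.0, 1.0, 0.1]`, a temperature of 0.1 almost always returns index 0 (the "most correct but boring" regime), while a temperature of 100 samples all three indices roughly uniformly.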

1

czk_21 t1_jdzq2zj wrote

My point was that it is interesting how much one could make in those 30 minutes with AI tools.

The mod referred to it as low quality and said I should put more thought into new posts, yet here we can see that the vast majority of other people also consider it interesting (92%). That's sort of proof they should not have deleted my post in the first place.

2

Saerain t1_jdzpr2o wrote

Speaking of which, does this use of "sentient" generally mean something more like "sapient"? Been trying to get a handle on the way naysayers are talking.

'Cause sentience is just having senses. All of Animalia, at the very least, is sentient.

Inclined to blame pop sci-fi for this misconception.

1

Saerain t1_jdzp73u wrote

What kind of corporate PR has claimed to have AGI?

As for "near", well, yes. It's notable that we already have most human cognitive capabilities in place as narrow AIs, separate from one another, and the remaining challenge, at least for the transformer paradigm, is going sufficiently multi-modal across them.

0

Surur t1_jdzodsu wrote

If something is impossible it may not be worth doing badly.

Maybe instead of testing a student's ability to write essays, we should be testing their ability to have fun and maintain stable mental health.

I mean, we no longer teach kids how to shoe horses or whatever other skill has become redundant with time.

4

Sure_Cicada_4459 t1_jdzoc3j wrote

Not if AI learns to "think logically and complete something complex by breaking it down into smaller tasks" and "keep learning new things and adapting to change". That's the point: the fact that you can run fast is irrelevant if the treadmill you are running on is accelerating at an increasing rate. The lesson people should really have learned by now is that every cognitive feat seems replicable; we are just benchmarks, and we know what tends to happen to those lately.

2

Griff82 t1_jdzoamx wrote

I'm new to the sub, but as a Gen Xer, I've seen great efficiencies develop in my lifetime, the fruits of which did not and will not accrue to the population at large. I expect to watch the same thing happen with AI.

1

Memento_Viveri t1_jdzn6ts wrote

I don't disagree with much of what is stated in the first paper, but I think it sets the wrong goalposts. I have no idea what the author means by a three-orders-of-magnitude increase in intelligence. I am already in awe of the smartest humans. Even if you could produce a machine intelligence that was only as smart as the smartest humans, I struggle to fathom the consequences. Machine intelligences can be reproduced ad infinitum. They don't need to sleep and never die. They can communicate with each other in a nearly instantaneous and unbroken manner. They have access to the sum total of all human knowledge with near-instantaneous and inerrant recall. An army of Einsteins and von Neumanns in constant, rapid communication that never sleeps, never forgets, and never dies.

What are the abilities of such a creation? I don't need an explosion of intelligence of three orders of magnitude. I believe the existence of even one machine with the intelligence level of a highly intelligent human will shake the foundation of society and have implications that are unimaginable. It will be a turning point in human history. Maybe there will be an explosion of godlike intelligence through self improvement, but I don't think this is a necessary condition for society and life to undergo revolutionary and unimaginable changes as a result of machine intelligence.

2

BubblyRecording6223 t1_jdzn4i1 wrote

We really will not know if it happens. Mostly people just repeat information, often inaccurately. For accepted facts, trained machines are more reliable than people. For emotional content, people usually give plenty of clues about whether they will be agreeable or not; machines can present totally bizarre responses with no prior warning.

2