Recent comments in /f/MachineLearning
belikeron t1_jdz9h8x wrote
Reply to comment by cheddacheese148 in [D] FOMO on the rapid pace of LLMs by 00001746
I prefer my version where they match their speed, knock on the window like Matthew McConaughey and say, "You losers getting in? We're going colonizing!"
nemorocksharder t1_jdz8kt5 wrote
Reply to comment by light24bulbs in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
What you're describing is exactly what I have been looking to do too, and I'm really surprised I'm not hearing more about it. Have you found any useful approaches to essentially adding to the LLM's corpus with target material/text? Or is anyone else trying to do this?
friuns t1_jdz8ef6 wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
I feel you on the FOMO with LLMs. It's like we're all aboard a speeding train, right? Don't stress too much, though! Remember, innovation is a collective journey, and there's always room for exploration, even with limited resources. Keep an eye on new techniques, distillation, and distributed compute - the ML world is full of opportunities to hop in and make a difference! Let's embrace the excitement and keep learning together!
probablynotmine t1_jdz84jf wrote
Sounds like a conspiracy theorist answer: "this is the scientific proof/source, and it might or might not exist"
Cherubin0 t1_jdz7s1i wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
The so-called bitter lesson just shows that the research is still at the start. Of course line fitting gets better the more data points you have. We are still just line fitting.
spiritus_dei t1_jdz7rmz wrote
Reply to comment by MootVerick in [D] FOMO on the rapid pace of LLMs by 00001746
I think this is the best formulation of the question I've seen, "Can you imagine any job that a really bright human could do that a superintelligent synthetic AI couldn't do better?"
Everyone loves to default to the horse and buggy example and they always ignore the horse. Are programmers and researchers the blacksmiths or are they the horses?
It's at least 50/50 that we're all the horses. That doesn't mean that horses have no value, but we don't see horses doing the work they once did in every major city prior to their displacement by automobiles.
We also hear the familiar refrain, "AI will create all of these new jobs that none of us can imagine." Really? Jobs that superintelligent AIs won't be able to do? It reminds me of a mixed metaphor. These two ideas are just not compatible.
Either they hit a brick wall with scaling, or we will all be dealing with a new paradigm where we remain humans (horses) or accept the reality that to participate in the new world you become a cyborg. I don't know if it's possible, but it may be the only path to "keep up," and it's not a guarantee since we'd have to convert biological matter to silicon.
And who wants to give up their humanity to basically become an AI? My guess is the number of people will shock me if that ever becomes a possibility.
I'm fine with retirement and remaining an obsolete human doing work that isn't required for the fun of it. I don't play tennis because I am going to play at Wimbledon or even beat anyone good - I play it because I enjoy it. I think that will be the barometer if there isn't a hard limit on scaling.
This has been foretold decades ago by Hans Moravec and others. I didn't think it was possible in my lifetime until ChatGPT. I'm still processing it.
braindead_in t1_jdz7okb wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
I'm getting geeky instead of being anxious. Computers are never gonna be the same again. LLMs are giving me the same vibes I got after writing my first program.
UBI is coming anyways.
KingsmanVince t1_jdz7k7x wrote
Reply to comment by liyanjia92 in [P] ChatGPT with GPT-2: A minimum example of aligning language models with RLHF similar to ChatGPT by liyanjia92
https://github.com/nichtdax/awesome-totally-open-chatgpt#ethanyanjialiminchatgpt
And your work is listed as another alternative to ChatGPT.
ghostfaceschiller t1_jdz6vzn wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
I think this was shown a while ago (like a week ago, which just feels like ten years).
While I do think this is important for several reasons, personally I don't see it as all that impactful for what I consider AI capable of going forward.
That's bc pretty much all my assumptions for the next couple years are based on the idea of systems that can loop and reflect on their own actions, re-edit code based on error messages, etc., which they are very good at.
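As a rough sketch of the kind of loop I mean (not anything any particular vendor ships; `ask_llm` below is just a stand-in for whatever chat-completion call you use):

```python
# Rough sketch of a generate -> run -> reflect loop. ask_llm() is a placeholder
# for whatever chat-completion call you use; it is not a real library function.
import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your model of choice and return raw code."""
    raise NotImplementedError

def run_with_reflection(task: str, max_attempts: int = 3) -> str:
    prompt = f"Write a Python script that does the following:\n{task}"
    code = ""
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        # Write the generated code to a temp file and try to run it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return code  # ran cleanly, stop looping
        # Otherwise feed the traceback back to the model and ask for a fix.
        prompt = (f"This script failed:\n{code}\n\n"
                  f"Error output:\n{result.stderr}\n\n"
                  f"Return a corrected version of the script.")
    return code
```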
antonivs t1_jdz6vai wrote
Reply to comment by abnormal_human in [D] FOMO on the rapid pace of LLMs by 00001746
Exactly what I was getting at, yes.
spiritus_dei t1_jdz6pml wrote
Reply to comment by ghostfaceschiller in [D] FOMO on the rapid pace of LLMs by 00001746
If he's the standard of "success" then based on Twitter that's something you may want to reconsider. Jürgen Schmidhuber comes in a close second.
hardmaru t1_jdz6md2 wrote
Reply to comment by Balance- in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Thanks!
Seankala t1_jdz6kty wrote
Reply to comment by wazis in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Yeah I read through the whole thing and it's not surprising. Train-test contamination has been a problem for a while now.
fiftyfourseventeen t1_jdz6eu7 wrote
Reply to comment by ---AI--- in [D] FOMO on the rapid pace of LLMs by 00001746
The only way you are training your own GPT-3-level model for $600 is by spending $300 on a gun, $300 renting a U-Haul, and heisting a datacenter.
Edit: maybe cheap out on the gun and truck; can't forget about the electricity costs of your newly acquired H100s.
Balance- OP t1_jdz6brh wrote
Reply to comment by hardmaru in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Whoops, done!
Seankala t1_jdz64gw wrote
Reply to comment by hardmaru in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Thanks!
Deep-Station-1746 t1_jdz5z9w wrote
Reply to [D] Is French the most widely used language in ML circles after English? If not, what are some useful (natural) languages in the field of machine learning? by Subject_Ad_9680
Pythonese is quite useful, from what I hear. Especially the Torchese dialect.
hardmaru t1_jdz5yb8 wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Hi /u/Balance-
Can you fix the formatting of the URL in your post?
The URL should be https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks
Headz0r t1_jdz5x1q wrote
Reply to comment by eamonious in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
How do you define difficulty of a word?
landongarrison t1_jdz5ao3 wrote
Reply to comment by nxqv in [D] FOMO on the rapid pace of LLMs by 00001746
This was an incredibly well thought out comment. Should be at the top.
Seankala t1_jdz53mn wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
It'd be nice to see the qualifications of the authors.
wazis t1_jdz4v8g wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
If it is true (too lazy to check), it is not surprising. If it is not, then it is also not surprising.
tripple13 t1_jdz4bch wrote
Reply to comment by ZestyData in [P] 🎉 Announcing Auto-Analyst: An open-source AI tool for data analytics! 🎉 by aadityaubhat
+1
Simcurious t1_jdzatox wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
That's not correct; the benchmark they used only contained Codeforces problems from after 2021.
From Horace's tweets: >Considering the codeforces results in the paper (very poor!), they might have only evaluated it on recent problems.