
MadamFuckalicious@blahaj.zone. It’s a gamble.



If I snore, my wife pokes me and asks me to roll over, then she goes back to sleep and so do I. She doesn’t usually snore, but I generally fall asleep faster anyway.

Hitler loved to play with kittens! He carried them around in his shirt pockets to keep them cozy and warm.


So, if speech production (or sentence construction) was what you meant by saying “it’s the same for human responses,” then yes, we agree. Both are probabilistic word generators and likely work in similar ways.
This is what I meant.
However, if you meant that the entirety of a human response—hearing/reading a comment, thinking about it, and responding—is the same as current LLMs generating text, then I strongly disagree.
That would be crazy, I’m glad you would disagree with that.
As for my asserting that human response is chemistry being more like asserting AI is electromagnetism: there are many reasons why, but the simplest illustration would be this:
I think it is entirely possible to build an inorganic but still functional human brain on electrical hardware. (In other words, full blown transhumanism or at the very least, “AGI”) If human response is chemistry in organics, it would be electromagnetism in silicon.
This is moving into a funny gray area, but what you are talking about is, I think, only possible if you take a route like the one covered in Jeff Hawkins’ “A Thousand Brains.” It’s not the most fun read if you’re not into neuroscience, but the second half is pretty relevant regardless.


I took some time to reflect on this post because I prefer not to be the moron; hopefully that didn’t slow you down, because your response is a good one.
To start, I think that hyping up the thinking people do as a counterargument is exactly the same thing I was doing by describing recent advancements in AI, and I’m not sure it moves us forward, so I will count that as a wash.
The next thing I will add here is an article I read a while ago that shows the explainability of AI decision making in a more advanced way than, say, Shapley values: https://transformer-circuits.pub/2025/attribution-graphs/methods.html
And you basically hit it for me: my biggest gripe is that we are not doing one-word-at-a-time predictors anymore. So functionally, yes, it’s true, but practically we are doing next-word prediction in LLMs the same way you’re doing next-word prediction when you talk: you know where you’re going, and you build sentences one word at a time.
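To make the “statistical next-word prediction” picture concrete, here is a toy sketch (a simple bigram counter with a made-up corpus; nothing like a real LLM, just an illustration of sampling one word at a time from observed statistics):

```python
import random
from collections import Counter, defaultdict

# Toy "statistical word predictor": count which word follows which
# in a tiny corpus, then sample one word at a time from those counts.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=5, seed=0):
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:  # dead end: no observed continuation
            break
        choices, weights = zip(*counts.items())
        # Pick the next word proportionally to how often it followed
        # the previous word in the corpus.
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Real LLMs replace the bigram counts with a learned model over long contexts, but the generation loop (pick one token, append, repeat) has the same shape.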
I also don’t think it’s fair to say “oh, fundamentally LLM stuff is just electromagnetism,” because while that is literally true, I think it’s still a better analogy to say statistics vs. chemistry. The comments are going to make you feel a certain way, you’ll make a response based on that feeling and what you know, and we will continue. I think that’s pretty functionally the same as “here’s an open prompt, we need to answer it, let’s take a statistical approach to the specific wording we will use.”
Now that we are including end goals, reflection, and rule sets, I don’t think it’s fair to call it just one-word-at-a-time prediction anymore, because training and optimization are happening at the prompt-and-response scale, not the word scale.
I have not read anything recently that is purporting to do something useful by editing NN weights directly, but that doesn’t mean it isn’t happening. I think that is actually just a side discussion in the end.
In the end, if there is evidence of planning and then adding words to meet this plan, even if it goes one word at a time, I think we are escaping “statistical word generator”. And if you say that we didn’t escape that threshold, I would suggest that when we talk we are doing the same thing: we understand grammar at a pretty fundamental level, but when it comes to vocab there are only a handful of words that make sense and we are making that decision in a way that is not altogether different from LLM sentence generation. I think the only sane way to disprove that is if you want to go looking for substantially offbeat phrasing or expression that would be outside the bounds of “statistical regularity” that LLMs are using.
Unless you want to show a history of totally whackadoodle phrasing for ideas on Lemmy, LLMs are no more statistical word predictors than you are.


I’m not an expert in Apple products, but I am fairly certain those only work if an iPhone is in range, so it would also depend on OP’s phone or the phones of anyone around.


AI/LLM is just a statistical word predictor.
You could perhaps make the argument that it’s a statistical token predictor, but that is about as useful as boiling down various weather services as “just statistics” or the economy as “just statistics”.
Making a language model that speaks a language is not that hard, but the world of science underlying how this is done is anything but simple. Saying it’s just statistics is ridiculously reductive, like saying your response to this comment is just chemistry. Context-driven tokenization, byte-level byte pair encoding, RoBERTa, fine-tuning methods, direct preference optimization, dataset curation and management, and curriculum learning for targeted performance and memory are all being developed and refined very fast (weekly or monthly breakthroughs sometimes), with pretty staggering performance increases. It still is not for everyone, because power tools injure, but instead of saying “just a statistics engine,” say what you really mean: “I don’t understand it, but I believe XXX is a bad use case for LLMs.”
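For a taste of what even the “boring” tokenization layer involves, here is a toy sketch of the core byte pair encoding idea (repeatedly merge the most frequent adjacent pair into a new token). This is an illustration only, not any production tokenizer; the function name and example text are made up:

```python
from collections import Counter

# Toy byte pair encoding: start from single characters and repeatedly
# merge the most frequent adjacent pair into a new, longer token.
def bpe_merges(text, num_merges):
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:  # nothing repeats; merging gains us nothing
            break
        merges.append(a + b)
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)  # apply the merge
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

tokens, merges = bpe_merges("low lower lowest", 3)
print(tokens, merges)
```

Real tokenizers work over bytes with learned merge tables tens of thousands of entries long, but the merge loop above is the same basic mechanism.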
To a lesser extent, the same goes for the claim that any company is “programming AI.” Not in the way that you mean it; curriculum learning, guardrails, and fine-tuning are all extremely indirect. Nobody has their hands directly in a model’s parameter space.
I was just yanking the other guy’s chain.
6? You were a year behind already! ;)
Sounds about right.
Improvisational jazz: “bro it’s been 84 measures of discordant shit, just resolve the damn thing already and play the root of the chord!”
edit:
Roses are red, violets are blue, some people have autism doodliotoodat mmbat goodatgooctmapanda macamapandiddle patmaksboodliodoo dimpaoacmapaway choopamadampakampa shittlybittly gampapawakombucha shoodleeooowasasaampandaweeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee dittlyittlybimbopadooooo oooooo oooooOoooooooOooOOoOOooooo chamimbamamdapaweeeee
Smooth jazz is better 🖕


I have a feeling one authored this thread.


Except the last 2. The last 2 go in the milk directly and fall in your mouth a little soggy when you finish drinking it.
Hey look, it’s the Hegelian dialectic, except the antithesis is a little weak and has some non-negligible overlap with students. But then, that’s why we iterate.
Jasper looks so horny in this gif I can’t take it.


Yes. The trick is not to be a cheap ass, but also not to get hustled. So I guess you have to know something about the work being done?
Edit:
Also, I keep the extra materials. I paid for them. They are mine.


See also that YOUR temperature drops overnight. Your body temp goes down about 2 degrees overnight starting at the same time your body starts releasing melatonin before bed. This means as you start to wake up you also start to warm up, so you’re partially at fault here too!
If you want a space to talk, that’s cool. If you are publishing works, sorry, that should be under thorough review from every angle, regardless of gender, class, nationality, or any other form of segmentation.
You describe it and I can hear my aunt (in law) saying it that way. I believe it.


Try a different medium or visit a doctor maybe?
That sounds like 20 minutes or less of reading, but the Internet is telling me that for slower readers 8k words can take up to an hour. If it takes you an hour, maybe don’t go for quite as long in a sitting and work up to it a little; if it’s 15 to 25 minutes of reading, I think I would switch to something more paper-like and/or talk to an eye doctor about it.
Keep fighting the good fight!