
That’s not what I’m asking. Also, I wasn’t using the term in any way, shape, or form.

Here’s another one: if a Latino doesn’t know what “woke” means, does that make them white?

I see; essentially this is vibe testing for race.
I was confused because I went to the census website and it looks like North African and Middle Eastern people are considered white, something I thought most Americans wouldn’t agree with. Your response makes more sense.

Honest question here from a non-US American: who are considered “white”?
Didn’t say it would be easy.
But just to be clear here, why would I do your job? And is the alternative that you make things up instead of educating yourself?
It looks like you could do a quick search and find an answer yourself, instead of making stuff up.
I can think of a ton of cases where increasing welfare does not impact crime statistics and increasing cop spending does, and vice versa.
The point is not just to increase it, but to make it enough for people not to resort to crime.
But I’m curious about this take. Scientific literature vastly supports the idea of welfare as a societal stabilizer. What it does not support as much is increased enforcement as a way to curb crime in developed and democratic countries. What concrete examples can you show to support your point?
Kind of inspired by Surreal Numbers.
You have made your point. You don’t really care about provable facts. Maybe you shouldn’t pretend that you are trying to argue in good faith at all?
Provable assertions about the physical world require measurable observations, not personal beliefs.
I’m panpsychist enough to believe matrix multiplication has qualia
According to this, any sufficiently skilled high school student could, with just pen and paper and enough time, build an entity from nothing that can experience pain.
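To make that concrete, here is roughly what the pen-and-paper exercise amounts to: a single-neuron forward pass, sketched in Python with made-up numbers.

    import numpy as np

    # A one-neuron "network": the entire forward pass is a handful of
    # multiply-adds that could be carried out with pen and paper.
    x = np.array([0.5, -1.0, 2.0])   # inputs (made up)
    w = np.array([0.1, 0.4, -0.2])   # weights (made up)
    b = 0.3                          # bias

    out = max(0.0, float(w @ x) + b)  # ReLU(w.x + b)
    print(out)  # 0.05 - 0.4 - 0.4 + 0.3 = -0.45, clipped to 0.0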
I’m honestly astounded by the amount of misinformation in this comment.
The second derives from the sexagesimal division of day and night cycles. The metre derives from early measurements of the Earth. One gram is the mass of one cubic centimetre of water.
I’m concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.
This assumption is not based on facts. It’s pretty much like saying that matrix multiplication can have feelings, or that heat-stressed silicon is equivalent to pain.
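For context, this is the entirety of what a back-propagation-style “adjustment” does, sketched as a single gradient-descent step on one linear neuron, with made-up numbers:

    import numpy as np

    # One gradient-descent step on a single linear neuron: the whole
    # "experience" is a few multiplications and subtractions.
    x, target = np.array([1.0, 2.0]), 1.0
    w = np.array([0.5, -0.3])
    lr = 0.1                          # learning rate

    pred = w @ x                      # forward pass: -0.1
    grad = 2 * (pred - target) * x    # gradient of squared error w.r.t. w
    w -= lr * grad                    # the weight "adjustment"
    print(w)                          # [0.72 0.14]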
But if this is actually a concern, RNNs have been widespread since the late ’90s. Any advanced search engine, translation engine, or weather forecasting model makes use of them.
Regardless, it’s all a moot point because we have lots of other reasons not to use LLMs.
This may be true, but it’s absolutely outside the scope of your original point. You dragged the conversation around claiming to be concerned about how models are “treated”, wrapping speculation in philosophical arguments that cannot be applied here, since none of your “what ifs” are remotely based on scientific consensus.
Any non-factual philosophical argument is debatable. We could forever discuss whether AI models could construct sensations and thought from perceptions, but we would then need to ignore the fact that models don’t, and cannot, do that, simply because there is no way for them to learn from direct experience as a whole, i.e. outside of a particular session, and without being “forcibly coerced”, i.e. without specific refinement mechanisms to temporarily “memorize” external instructions, which in LLM engineering just means extending their context.
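A minimal sketch of what that “memorization” amounts to; send_to_model is a hypothetical stand-in for any completion API:

    # The model stores nothing between calls; the client just re-sends
    # an ever-growing transcript.
    def send_to_model(messages):
        # hypothetical stand-in for a real completion API call
        return f"(reply based on {len(messages)} prior messages)"

    context = []

    def chat(user_message):
        context.append({"role": "user", "content": user_message})
        reply = send_to_model(context)
        context.append({"role": "assistant", "content": reply})
        return reply

    # "Remembering" an instruction just means keeping it in this list;
    # start over with an empty list and the "memory" is gone.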
This all doesn’t even take into account that models are, in essence, non-deterministic: given the same input, there’s no guarantee that subsequent outputs will be the same. In other words, today Claude may tell you that summer sunsets make it happy; tomorrow it might say that they make it sad, and so on.
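A minimal sketch of where that run-to-run variation comes from, with toy numbers: sampling-based decoding draws each token from a probability distribution instead of looking it up.

    import numpy as np

    # The same input always yields the same probabilities, but the
    # drawn token can differ on every run.
    logits = np.array([2.0, 1.0, 0.5])             # toy model scores
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: ~[0.63, 0.23, 0.14]

    rng = np.random.default_rng()
    print(rng.choice(["happy", "sad", "wistful"], p=probs))  # varies per run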
Anyway, there’s barely any debate in academia, i.e. among computer scientists, about AI being sentient or showing hints of qualia. Maybe a paper here and there; little more than curiosities. Outside of academia? Yeah, sure, but it’s barely more than science fiction, and pretty uninteresting unless we are talking about conspiracy theories or just wild speculation.
AI isn’t conscious. Feedback loops and subsequent responses in LLMs are grounded purely in training datasets, thus any “internal dialogue” emulated by an LLM is just echoes of someone else’s data.


You wouldn’t be able to tell, because “pasture raised” isn’t a formally defined term. Hens are mostly fed the same soy and corn they would eat if they were “free range”, and having more space to roam doesn’t significantly improve taste; only what they eat does.


Most eggs in the US taste the same. I wish there were good and affordable eggs widely available in the US, but that’s not the case.


True, in both cases you cannot tell the difference.


I don’t like GitHub, but this looks like someone was using an authorized SSH key while their git client was configured with some unknown email address. Happens all the time.
Would be funny if they only find out once they’ve migrated off GitHub, though.
We aren’t comparing humans to code.
Except for the bit where LLM behaviors aren’t deterministic, but those of most compilers in most situations are.
And before anyone says that LLVM in version X produced wildly different assembly from version Y, it is not remotely comparable to what LLMs do, not even close.
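The contrast is easy to demonstrate in a toy sketch; the hash below stands in for any deterministic transform such as a compiler, and random.choice for sampling-based decoding:

    import hashlib, random

    # A deterministic transform yields identical output for identical input...
    def pseudo_compile(source: str) -> str:
        return hashlib.sha256(source.encode()).hexdigest()

    print(pseudo_compile("int main(){}") == pseudo_compile("int main(){}"))  # always True

    # ...while a sampling process (an LLM decoding with temperature > 0)
    # offers no such guarantee.
    print(random.choice(["A", "B"]) == random.choice(["A", "B"]))  # True or False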
You both used the same “white people” label; I would think either of you could give me an answer.