• 0 Posts
  • 44 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • So you mean that a key component to intelligence is learning from others? What about animals that don’t care for their children? Are they not intelligent?

    You contradict yourself: the first part of your sentence gets my point correctly, while the second questions an incorrect understanding of it.

    What about animals that can’t learn at all, where their brains are completely hard-wired from birth? Is that not intelligence?

    Such an animal does not exist.

    It would indeed turn your world view upside down if you realised that you are also just a computer made of flesh and that all your output is deterministic, given the same input.

    That’s a long way of saying “if free will didn’t exist”, at which point your argument becomes moot, because I would have no influence over what it does to my world view.


  • I had a bunch of sections of your comment that I wanted to quote; let’s see how much I can answer without copy-pasting too much.

    First off, my apologies: I misunderstood your analogy about machine learning as a comparison to how we learn with our developed brains, rather than to evolution. I concur that the process of evolution is similar, just a bit less targeted (and hence so much slower) than deep learning. The result, however, is “cogito ergo sum” - a creature that started self-reflecting and wondering about its own consciousness.

    And this brings me to humans thinking logically: as such creatures, we are able to form logical thoughts, which allow us to understand causality. To give an example of what I mean: humans (and some animals) did not need the invention of logic or statistics to observe moving objects and realize that where something moves, something has moved it - so when they see an inanimate object move, they will suspect the most likely cause somewhere in the direction the object came from. When we do not find that cause (someone throwing something) there, we will investigate further (if curious enough) and look for one. That’s how curiosity turns into science. But it is very much targeted, nothing a deep learning system can do.

    And that’s kind of what I would also expect from something that calls itself “AI”: a systematic analysis / categorization of the input data for the purpose the system was built for - and, for a general AI, also the ability to analyze phenomena to understand their root cause.

    Of course, logic is often not the same as our intuitive thoughts, but we are able to correct our intuitive assumptions based on outcomes and then understand the actual causal relation (unlike a deep learning system), based on our corrected “model” of whatever we observed. In the end, that’s also how science works: we describe reality with a model, and when we discover a discrepancy, we aim to update the model. But we always have a model.

    With regards to some animals understanding objects / causal relations, I believe that - beyond having a concept of an object - defining what I mean by “understanding” is not really helpful, considering that the spectrum of intelligence among animals overlaps with that of humans. Some of the more clever animals clearly have more complex thoughts, and you can interact with them in a more meaningful way than with some humans whose brains are less developed, be it due to infancy, a disability, or a psychological condition.

    How would you describe consciousness, though? I wish you would offer that instead of just saying “nuh uh” and calling me chatGPT :(

    First off, I meant the LLM comment seriously - I am already considering no longer participating in internet debates, because LLMs have become so sophisticated that I can no longer know whether I am arguing with a human or whether some LLM is wasting my precious lifetime.

    As for how to describe consciousness, that’s a largely philosophical topic and strongly linked (IMO) to whether or not free will exists, although theoretically it would be possible to be conscious but not have any actual free will. I cannot define the “sense of self” better than philosophers do, because our language does not have the words to even properly structure our thoughts on that. I can, however, tell you how I define free will:

    • assuming you could measure every atom, sub-atomic particle, impulse & spin thereof, energy field and whatever other physical properties there are in a human being and its environment
    • when that individual moves a limb, you would be able to trace - based on what we know:
      • the movement of the limb back to the muscles contracting
      • movement of the muscles back to electrical signals in some nerves
      • the nerve signals back to some neurons firing in the brain
    • if you trace that chain of “causes” further and further, eventually, if free will exists, it would be impossible to find a measurable cause for some “lowest level trigger event”

    And this lowest level trigger event - attributed by some researchers to quantum decay - might be influenced by our free will, even if, because we have this “brain lag”, the actual decision happened quite some time earlier, and even if some decisions are hardwired (like reflexes, which can also be trained).

    My personal model of how I would like consciousness to be: an as-of-yet undiscovered property of matter that every atom has, but which only exhibits an actual consciousness when combined with an organic computer complex enough to process and store information.

    In other words: If you find all the subatomic particles (or most of them) that made up a person in history at a given point in time, and reassemble them in the exact same pattern, you would, in effect, re-create that person, including their consciousness at that point in time.

    If you duplicate them from other subatomic particles with the exact same properties (as far as we can measure) - who knows? Because we could neither measure nor observe the “consciousness property”, how would we know whether it is equal among all particles that are equal in the properties we can measure? That would be like assuming all atoms of a certain element are the same because we see no chemical differences between isotopes.


  • A consciousness is not an “output” of a human brain. I have to say, I wish large language models didn’t exist, because now for every comment I respond to, I have to consider whether or not an LLM could have written it :(

    In effect, you compare learning on training data (“input -> desired output”) with the systematic teaching of humans, where we teach each other causal relations. The two are fundamentally different.

    Also, you are questioning whether or not logical thinking (as opposed to throwing some “loaded” neuronal dice) is even possible. In that case, you may as well stop posting right now, because if you can’t think logically, there’s no point in you trying to make a logical point.





  • Telling everyone else how they should use language is just an ultimately moronic move. After all we’re not French, we don’t have a central authority for how language works.

    There’s a difference between objecting to misuse of language and “telling everyone how they should use language” - you may not have intended it, but you used a straw man argument there.

    What we all should be acutely aware of (but unfortunately many are not) is how language is used to harm humans, animals or our planet.

    Fascists use language to create “outgroups”, which they then proceed to dehumanize and eventually violate or murder. Capitalists speak about investor risks to justify return on investment, and proceed to lobby for de-regulation of markets, which causes human and animal suffering through price gouging and factory farming of livestock. Tech corporations speak about “Artificial Intelligence” and proceed to persuade regulators that - because these are “intelligent” systems - this software may be used for autonomous systems, which then proceed to cause injury and death when they malfunction.

    Yes, all such harm can be caused by individuals in daily life - individuals can be murderers or extort people over something they really need, or a drunk driver can cause an accident that kills people. However, the language that normalizes or facilitates such atrocities or dangers on a large scale is dangerous, and therefore I will continue calling out those who want to label the shitty penny-market LLMs and other deep learning systems as “AI”.


  • With regards to the dog & my description of intelligence, you are wrong: Based on all that we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That’s true intelligence.

    When you want to have artificial intelligence, even the most basic software can have some kind of limited understanding that actually fits this attempt at a definition - it’s just that the functionality will be very limited and pretty much appear useless.

    Think of it this way:

    • deterministic algorithm -> has concepts and causal relations (but no consciousness, obviously); results are predictable (deterministic) and can be explained
    • deep learning / neural networks -> does not implicitly have concepts or causal relations; results are statistical (based on previous result observations) and can not be explained - there’s actually a whole sector of science looking into how to model such systems’ way to a solution

    Addition: the input / output filters of pattern recognition systems are typically fed through quasi-deterministic algorithms to “smoothen” the results (make output more grammatically correct, filter words, translate languages).
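    To make that contrast concrete, here is a minimal toy sketch (hypothetical code, made-up names, not taken from any real system): a hard-coded rule whose every result can be explained, next to a “trained” threshold whose result is purely statistical.

    ```python
    # Hypothetical toy illustration of the contrast above (not from any real system).

    def deterministic_classifier(weight_kg: float) -> str:
        """Hard-coded rule: the concept (a weight threshold) and the causal
        relation behind the answer are explicit, so every result can be explained."""
        return "heavy" if weight_kg >= 10.0 else "light"

    def fit_threshold(samples: list[tuple[float, str]]) -> float:
        """'Trained' rule: the threshold is whatever minimises error on the samples,
        so the result is statistical and the only 'reason' is the training data."""
        best_t, best_errors = 0.0, len(samples) + 1
        for t, _ in sorted(samples):
            errors = sum((w >= t) != (label == "heavy") for w, label in samples)
            if errors < best_errors:
                best_t, best_errors = t, errors
        return best_t

    if __name__ == "__main__":
        print(deterministic_classifier(12.0))  # "heavy", and we know exactly why
        data = [(2.0, "light"), (8.0, "light"), (11.0, "heavy"), (15.0, "heavy")]
        print(fit_threshold(data))             # 11.0, simply because that is what the data says
    ```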

    If you took enough deterministic algorithms, typically tailored to very specific problems & their solutions, and were able to use those as building blocks for a larger system that is able to understand a larger part of the environment, then you would get something resembling AI. Such a system could be tested (verified) on sample data, but it should not require training on data.

    Example: You could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to associate object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way to how humans can mischaracterize the things we see - we also sometimes do not recognize an object correctly.
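    A rough toy sketch of what I mean (hypothetical code, made-up names, nothing from a real library): a purely geometric check for a shape, combined with a dominant colour, mapped to an object type that a decision process could then reason about. No training data involved, and every result can be explained.

    ```python
    import math

    # Hypothetical toy sketch: deterministic shape + colour -> object type.

    def looks_like_circle(outline: list[tuple[float, float]], tolerance: float = 0.1) -> bool:
        """True if all outline points lie roughly at the same distance from their centroid."""
        cx = sum(x for x, _ in outline) / len(outline)
        cy = sum(y for _, y in outline) / len(outline)
        radii = [math.hypot(x - cx, y - cy) for x, y in outline]
        mean_r = sum(radii) / len(radii)
        return all(abs(r - mean_r) <= tolerance * mean_r for r in radii)

    def classify_object(outline: list[tuple[float, float]], dominant_colour: str) -> str:
        """Shape + colour -> object type; a decision layer could attach causal rules to the type."""
        if looks_like_circle(outline) and dominant_colour == "orange":
            return "ball"
        return "unknown"

    if __name__ == "__main__":
        # 12 points on a circle of radius 5 around the origin
        circle = [(5 * math.cos(2 * math.pi * i / 12), 5 * math.sin(2 * math.pi * i / 12))
                  for i in range(12)]
        print(classify_object(circle, "orange"))  # "ball"
        print(classify_object(circle, "blue"))    # "unknown"
    ```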