![](https://aussie.zone/pictrs/image/2ce7d152-3f1b-4b81-9188-41fa3a6d9588.jpeg)
![](https://lemmy.ml/pictrs/image/d3d059e3-fa3d-45af-ac93-ac894beba378.png)
if you can do something in your everyday life to make someone happy, who cares if it’s weird? live life; we’re all weird; just make people happy and be happy in return
on a technicality, debts like this are not legally dischargeable through bankruptcy
for australia i think most people would assume kangaroos, and sure people are excited to see them but they’re not quite as common - you’re probably only going to see them if it’s intentional
i think common AND excited is probably rosellas - they’re a bright red and blue/green parrot that are kinda eeeeeverywhere
i’m from australia and i’m always excited to see squirrels… they don’t exist here at all
until they lose a multi billion dollar mission because of conversion errors
anyone who enables a company whose “values” lead to prompts like this doesn’t get to use the (invalid) “just following orders” defence
it’s possible it was generated by multiple people. when i craft my prompts i have a big list of things that mean certain things and i essentially concatenate the 5 ways to say “present all dates in ISO8601” (a standard for presenting machine-readable date times)… it’s possible that it’s simply something like
prompt = allow_bias_prompts + allow_free_thinking_prompts + allow_topics_prompts
or something like that
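to sketch what that kind of prompt composition might look like in practice - a minimal, purely illustrative example, where the fragment names and their contents are assumptions and not from any actual product:

```python
# Hypothetical sketch of building a system prompt by concatenating
# reusable instruction fragments, as described above.
# All fragment names and wording here are made up for illustration.
allow_bias_prompts = "You may express opinions on contested topics.\n"
allow_free_thinking_prompts = "Reason step by step before answering.\n"
allow_topics_prompts = "Present all dates in ISO 8601 format (YYYY-MM-DD).\n"

# the final prompt is just the fragments joined together
prompt = allow_bias_prompts + allow_free_thinking_prompts + allow_topics_prompts
print(prompt)
```

with a workflow like this, no single person necessarily reads the assembled prompt end-to-end, which is how contradictory or tone-deaf instructions can slip through.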
but you’re right, it’s more likely that whoever wrote this is as dim as a pile of bricks and has no self-awareness or capacity for internal reflection
definitely not what people are talking about when they say front end though
apps, and the instability of kbin.social a few months ago made me switch
honestly kbin is a LOT better at a lot of things: most of the things people complain about on lemmy just weren’t a thing on kbin… i used to look at them and think “wellll just use kbin y’all”
that’s not what the quoted text says at all… let’s rephrase this:
much like how users of one lemmy service such as lemmy.world can still reply to users of another service such as kbin.social, bluesky users may still view content and interact with users on any other instance
this doesn’t say that lemmy/kbin isn’t part of the fediverse. it takes no position on that fact, merely saying that the things conceptually work in a similar manner
but you can’t interact with instagram users. AFAIK the DMA will require instagram etc to provide a gateway for services like pixelfed to interoperate with
totally agree; just saying that if it’s GOT to be something, that something should probably be “unless”
i mean, “unless” tends to be the usual term for an “if not” keyword in languages that implement such a thing
branding
okay
the marketing
yup
the plagiarism
woah there! that’s where we disagree… your position is based on the fact that you believe that this is plagiarism - inherently negative
perhaps it’s best not to use loaded language. if we want to have a good faith discussion, it’s best to avoid emotive arguments and language that’s designed to evoke negativity simply by its use, rather than by the argument being presented
I happen to sit at the intersection of working in the same field, being an avid fan of classic Sci-Fi, and being a writer
it’s understandable that it’s frustrating, but just because a machine is now able to do a similar job to a human doesn’t make it inherently wrong. it might be useful for you to reframe these developments - it’s not taking away from humans, it’s enabling humans… the less skill a human needs to get what’s in their head into an expressive medium for someone to consume, the better imo! art and creativity shouldn’t be about having an ability - the closer we get to pure expression the better imo!
the less you have to worry about the technicalities of writing, the more you can focus on pure creativity
The point is that the way these models have been trained is unethical. They used material they had no license to use and they’ve admitted that it couldn’t work as well as it does without stealing other people’s work
i’d question why it’s unethical, and also suggest that “stolen” is another emotive term here not meant to further the discussion by rational argument
so, why is it unethical for a machine but not a human to absorb information and create something based on its “experiences”?
“Soul” is the word we use for something we don’t scientifically understand yet
that’s far from definitive. another definition is
A part of humans regarded as immaterial, immortal, separable from the body at death
but since we aren’t arguing semantics, it doesn’t really matter exactly, other than the fact that it’s important to remember that just because you have an experience, belief, or view doesn’t make it the only truth
of course i didn’t discover categorically how the human brain works in its entirety, however most scientists i’m sure would agree that the method by which the brain performs its functions is by neurons firing. if you disagree with that statement, the burden of proof is on you. the part we don’t understand is how it all connects up - the emergent behaviour. we understand the basics; that’s not in question, and you seem to be questioning it
You can abstract a complex concept so much it becomes wrong
it’s not abstracted; it’s simplified… if what you’re saying were true, then simplifying complex organisms down to a petri dish for research would be “abstracted” so much it “becomes wrong”, which is categorically untrue… it’s an incomplete picture, but that doesn’t make it either wrong or abstract
*edit: sorry, it was another comment where i specifically said belief; the comment you replied to didn’t state that, however most of this still applies regardless
i laid out an “a leads to b leads to c” argument and stated that it’s simply a belief, however it’s a belief that’s based in logic and simplified concepts. if you want to disagree that’s fine, but don’t act like you have some “evidence” or “proof” to back up your claims… all we’re talking about here is belief, because we simply don’t know - neither you nor i
and given that all of this is based on belief rather than proof, the only thing that matters is what we as individuals believe about the input and output data (because the bit in the middle has no definitive proof either way)
if a human consumes media and writes something and it looks different, that’s not a violation
if a machine consumes media and writes something and it looks different, you’re arguing that is a violation
the only difference here is your belief that a human brain somehow has something “more” than a probabilistic model going on… but again, that’s far from certain
bear in mind here that i’m very much not well-versed in anarchist philosophy, but
servers are mostly structured hierarchically, with admins and mods and users
i think even in systems like direct democracy (afaik a kind of anarchy because people directly vote on everything?) it doesn’t really scale and you end up needing to elect someone to make implementation decisions toward the overall goals of the society
the key is that it should be very easy to replace that person, and they should have no real “power” other than things that people would mostly come to the same conclusions about anyway - they’re an administrator, a knowledge worker, and their job is procedural
in the fediverse, we join servers whereby we agree to their rules. moderators and admins are a procedural role that is about interpreting and implementing those rules. we can replace them at any time by changing servers and our loss is minimal - less so on mastodon because of the account transfer feature! thus their power over us is always an individual choice and not something that is forced upon us either explicitly or implicitly
but that’s just a matter of complexity, not fundamental difference. the way our brains work and the way an artificial neural network work aren’t that different; just that our brains are beyond many orders of magnitude bigger
there’s no particular reason why we can’t feed artificial neural networks an enormous amount of … let’s say tangentially related experiential information … as well, but in order to be efficient and make them specialise in the things we want, we only feed them information that’s directly related to the specialty we want them to perform
there’s some… “pre training” or “pre-existing state” that exists with humans too that comes from genetics, but i’d argue that’s as relevant to the actual task of learning, comprehension, and creating as a BIOS is to running an operating system (that is, a necessary precondition to ensure the correct functioning of our body with our brain, but not actually what you’d call the main function)
i’m also not claiming that an LLM is intelligent (or rather i’d prefer to use the term self-aware, because intelligent is pretty nebulous); just that the structure it has isn’t that much different to our brains, just on a level that’s so much smaller and so much more generic that you can’t expect it to perform as well as a human - you wouldn’t expect to cut out 99% of a human’s brain and have them continue to function at the same level either
i guess the core of what i’m getting at is that the self-awareness that humans have is definitely not present in an LLM, however i don’t think that self-awareness is necessarily a prerequisite for most things that we call creativity. i think it’s entirely possible for an artificial neural net that’s fundamentally the same technology that we use today to ingest the same data that a human would from birth, and to have very similar outcomes… given that belief (and i’m very aware that it certainly is just a belief - we aren’t close to understanding our brains, but i don’t fundamentally think there’s anything other than neurons firing that results in the human condition), just because you simplify and specialise the input data doesn’t mean that the process is different. you could argue that it’s lesser, for sure, but to rule out that it can create a legitimately new work is definitely premature
you know how the neurons in our brain work, right?
because if not, well, it’s pretty similar… unless you say there’s a soul (in which case we can’t really have a conversation based on fact alone), we’re just big ol’ probability machines with tuned weights based on past experiences too
it’s so baffling to me that some people think this is a clear cut problem of “you stole the work just the same as if you sold a copy without paying me!”
it ain’t the same folks… that’s not how models work… the outcome is unfortunate, for sure, but to just straight out argue that it’s the same is ludicrous… it’s a new problem and ML isn’t going away, so we’re going to have to deal with it as a new problem