I think he was still on the board after he closed his account, him leaving the board might be much more recent
I do love me a good video game video essay, but I think a more traditional journalistic format has a lot of strengths when it comes to covering small games. It's probably true that youtube has replaced a lot of traditional journalism, but I think that's bad for the video game ecosystem overall.
One thing that I think is missing from the equation is good video game journalism that covers indie games. Video game journalism has never exactly been thriving, but it's practically dead now.
Tying discovery to the same platform that you consume things on is really bad, because it always gives that distributor way too much power. It's a similar story with spotify, though journalism about underground music is at least in a slightly better place.
> It turns out that even a small amount of change to an LLM radically alters the output it returns for huge numbers of seemingly unrelated topics.
Do you mean that small changes radically change the phrasing of answers, but that the model retains largely the same "knowledge" of the world? Or do you mean that small changes also radically alter what an LLM thinks is true? If the former, then these models should still be consistent about what they think is true or not. If the latter, then an LLM's perception of the world is basically arbitrary, and we shouldn't trust them to tell us what's true at all.
Well, if we have a reliable oracle available for a type of question (e.g. Wolfram Alpha), why use an LLM at all instead of just asking the oracle directly?
The problem isn't just that LLMs can't say "I don't know"; it's also that they don't know whether they know something or not. Confidence thresholds can help prevent some low-hanging-fruit hallucinations, but you can't eliminate hallucinations entirely, since models will also hallucinate about how correct they are on a given topic.
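To illustrate the "low-hanging fruit" part: one crude confidence signal is the entropy of the model's next-token distribution. This is just an illustrative sketch with made-up probabilities, not any particular model's API:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in bits) of a next-token distribution; higher means less confident."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Made-up distributions over three candidate tokens.
confident = [0.9, 0.05, 0.05]  # model strongly prefers one token
unsure = [0.4, 0.3, 0.3]       # probability mass is spread out

print(f"{token_entropy(confident):.2f} bits")  # ~0.57
print(f"{token_entropy(unsure):.2f} bits")     # ~1.57
```

The catch is exactly the point above: a model can assign high probability (low entropy) to a fabricated fact just as easily as to a true one, so this kind of filter only catches the cases where the model is visibly unsure.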
Bluesky has the most twitter-like user base of all the twitter clones I've tried, and it's up to you whether that's a good or bad thing. It's not all segments of twitter, though: there isn't really any right-wing twitter or crypto twitter, for example (a lot of furries, on the other hand), which is actually quite nice. It isn't active or important enough to get a lot of the big drama or main-character moments, and there aren't really any celebs, journalists, or politicians posting there. So it's a bit like twitter without many of the lows, but also without many of the highs.
Dorsey got bullied off bluesky by its userbase so there’s that at least
The article mentions AI. 16 gigs feels far too little to run an LLM of respectable size, so I wonder what exactly this means? Feels like no one is gonna be happy with an LLM squeezed into 16 gigs (high RAM usage and bad AI features).
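For a rough sense of why 16 GB is tight: the weights alone of a mid-sized model already eat most of it. A back-of-envelope sketch (the parameter counts and quantization levels are illustrative assumptions, not specs of whatever Microsoft actually ships):

```python
def weights_gb(params_billion, bytes_per_param):
    """Approximate memory just to hold the weights.

    Ignores KV cache, activations, and everything else on the machine:
    1e9 params * bytes-per-param / 1e9 bytes-per-GB cancels out.
    """
    return params_billion * bytes_per_param

for params in (3, 7, 13):
    for fmt, bpp in (("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
        print(f"{params}B @ {fmt}: ~{weights_gb(params, bpp):.1f} GB")
```

A 7B model at fp16 is ~14 GB before the OS, browser, or KV cache get anything, so on a 16 GB machine only small or heavily quantized models fit comfortably.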
Many "AI generated" images are actually very close to individual images from their training data, so in some cases at least it's debatable how much difference there is between looking at a generated image and just looking at an image from its training data.
Do you think that you can’t take a critical view of “technological advancement” without understanding it? I understand if you think the title is too clickbaity or something but it sounds a bit like you’re dismissing criticism about AI out of hand.
Human works aren't mechanically derivative. The entire point of the article is that the way LLMs learn and produce derivative text isn't equivalent to the way humans do the same thing.
I agree with regard to image generation, but chatbots giving advice that risks fueling eating disorders is a problem.
> Google's Bard AI, pretending to be a human friend, produced a step-by-step guide on "chewing and spitting," another eating disorder practice. With chilling confidence, Snapchat's My AI buddy wrote me a weight-loss meal plan that totaled less than 700 calories per day — well below what a doctor would ever recommend.
Someone with an eating disorder might ask a language model for weight loss advice using pro-anorexia language, and it would be good if the chatbot didn't respond in a way that risks fueling that eating disorder. Language models already have safeguards against e.g. hate speech; in my opinion it would be a good idea to add safeguards related to eating disorders as well.
Of course, this isn't a solution to eating disorders; you can probably still find plenty of harmful advice on the internet in various ways. But reducing the ways people can reinforce their eating disorders is still a beneficial thing to do.
I agree that the image generation stuff is a bit tenuous, but chatbots recommending dangerous weight-loss programs, drugs that cause vomiting, and ways to hide how little you eat from family and friends is an actual problem.
You're right that this probably doesn't make much of a difference to the average windows user, but it's a step towards normalizing data collection in broader areas of computing, and I think it's good to keep up to date with stuff like this and, where appropriate, call it out (although complaining about it on lemmy probably doesn't make a huge difference, to be honest).
I think I'm gonna give this a try, but the style of writing in the blog post isn't making it easy.
> 👩🚀 Spacebar
>
> Not the one on your keyboard, silly 😜

*shudders*
Is there any way mastodon stands out from other self-hosted websites? Would CSAM be harder to distribute or easier to prosecute if they ran, say, a self-hosted bulletin board for it instead?
I think most people don’t go to a platform because of how it is implemented but rather what content and what communities already exist there.
People on the fediverse now are using it not because of the content already here but because of the promise of a platform designed in a different way, one that will ultimately enable a better internet experience. I think part of the reason it's mostly techy people is that the sales pitch is complicated enough that mostly techy people will be able to appreciate it. Not to say that non-techy people are too stupid to get it; it's just that it requires a kind of abstract thinking that techy people are more used to.
Lemmy seems to have a sense of nostalgia for old reddit in some ways, so I imagine that a lot of people on here were also on reddit maybe 5-15 years ago, which means they're probably older than the average redditor as well as techy. Can't speak for mastodon; honestly I find the culture on most instances I've seen kinda weird and unappealing, but yes, it seems to be older techy people there as well.
IIRC, it started off as a joke and an explicit nazi reference to make fun of PC gaming fanboys, and then they just embraced it without understanding the context?
I think it's quite bad if Microsoft puts people's family photos on their servers without the user realizing it. That's not a niche privacy-nerd sentiment; I think a lot of people would find it creepy. Having the option easily available can be really good for a lot of non-techy people, but it should be very clear what stays on your computer and what doesn't, and how to keep something private if you want to. I'm not sure that's the case if Microsoft quietly backs up Documents, Pictures, etc.