• 0 Posts
  • 168 Comments
Joined 1 year ago
Cake day: June 11th, 2023




  • Oh, big difference there, though. Suicide Squad actually IS a looter shooter driven by a wish to chase a business trend from five years to a decade ago. Guardians is a strictly single-player, Mass Effect-lite narrative action game (which, yeah, fits the material).

    I’d be with you in the argument that it would have been an even better game without the Marvel license, because then they could have skipped trying to rehash bits from the movies’ look and feel, which are consistently the worst parts of the game. But then, without the license it would never have been made, so… make mine Marvel, I guess. Well worth it.


  • Nah, I’m mostly kidding. About the being my enemy part. The game is, in fact, awesome, and you should fetch it somewhere before the absolute nightmare of licensed music and Disney IP bundled within it makes it unsellable on any digital platform forever.

    Seriously, I bought a physical copy of the console version just for preservation, because if you want to know what will be on the overpriced “hidden gem” lists of game collectors in thirty years, it’s that.


  • Well, then you’re my enemy, because that game is great, Marvel connection or not. In fact it’s a fantastic companion piece to the third Guardians movie, because they’re both really good in their respective mediums but they push radically opposite worldviews (one is a Christian parable, the other a humanist rejection of religious alienation).

    And yeah, holy crap, they made a Marvel game about grief and loss and managing them without turning to religion and bigotry and it was awesome and beautiful and nobody played it and you all suck.


  • Well, it depends on when they cancelled it and on how much it cost. That thing didn’t sell THAT poorly, but Square, as usual, was aiming way above what’s realistic. Estimates on Steam alone put it above 1 million copies sold. You can assume PS5 was at least as good.

    Based on those same estimates it actually outsold Guardians. Which is an absolute travesty and I blame anyone who hasn’t played it personally.



  • I am honestly not super sure about this strategy of buying your way into being a major publisher by vacuuming up IP nobody else was bidding for. What did they think would happen? Did they think the old majors were leaving a ton of money on the table, only to realize too late that these IPs really weren’t that profitable? Or was it just a bet that low interest rates would last forever and the portfolio would pay for itself if they bundled it large enough?

    I don’t know what the business plan was meant to be, and it’s kinda killing me that I don’t fully grasp it.



  • To be clear about what I’m saying, the setup is subtitles in the same language as the audio. So if you’re learning French you set French audio with French subtitles.

    That REALLY helps bind the pronunciation to the writing, and it actually makes the speech far easier to understand. Assuming you’re reading the subtitles at the same time, of course.

    You won’t understand a lot of it, and you’ll have to put up with the frustration of losing the plot often for a while, but it does help, in my experience.

    Subtitles in your own native language just make you tune out the audio and read the dialogue. That’s not helpful.


  • This is the answer. The answer is Netflix and Youtube. Anything with media using both audio and subtitles in the language you’re trying to learn.

    You still need a teacher to get you past the basics of vocabulary and grammar (and no, language learning apps are probably not an effective way past that). But once you have enough basic words and you understand how a sentence is put together, the answer is to watch media even if you don’t fully understand what’s being said, paying attention and stopping occasionally to use dictionaries and translators on the sentences you almost get.

    I know people who spent years spinning their wheels on learning apps while refusing to sit through media in the target language because they get frustrated or tired by the effort of trying to keep up. It’s a bit annoying, but it really works.


  • I don’t disagree on principle, but I do think it requires some thought.

    Also, that’s still a pretty significant backstop. You basically would need models to have a way to check generated content for copyright, in the way Youtube does, for instance. And that is already a big debate, whether enforcing that requirement is affordable to anybody but the big companies.

    But hey, maybe we can solve both issues the same way. We sure as hell need a better way to handle mass human-produced content and its interactions with IP. The current system does not work and it grandfathers in the big players in UGC, so whatever we come up with should work for both human and computer-generated content.


  • That’s not “coming”, it’s an ongoing process that has been going on for a couple hundred years, and it absolutely does not require ChatGPT.

    People genuinely underestimate how long these things have been an ongoing concern. Much like crypto isn’t that different from what you can do with a server, “AI” isn’t a magic key that unlocks automation. I don’t even know how this mental model works. Is the idea that companies currently hiring millions of copywriters will just rely on automated tools? I get that, yeah, a bunch of call center people may get removed (again, a process that has been ongoing for decades), but how is compensating Facebook for scrubbing their social media posts for text data going to make that happen less?

    Again, I think people don’t understand the parameters of the problem, which is different from saying that there is no problem here. If anything the conversation is a net positive in that we should have been having it in 2010 when Amazon and Facebook and Google were all-in on this process already through both ML tools and other forms of data analysis.


  • I’m gonna say those circumstances changed when digital copies and the Internet became a thing, but at least we’re having the conversation now, I suppose.

    I agree that ML image and text generation can create something that breaks copyright. You can for sure duplicate images or use copyrighted characters. This is also true of Youtube videos and Tiktoks and a lot of human-created art. I think it’s a fascinating question to ponder whether the infraction is in what the tool generates (i.e. it made a picture of Spider-Man and sold it to you for money, which is under copyright and thus can’t be used that way) or in the ingest that enables it to do that (i.e. it learned on pictures of Spider-Man available on the Internet, and thus all output is tainted because the images are copyrighted).

    The first option makes more sense to me than the second, but if I’m being honest I don’t know if the entire framework makes sense at this point at all.


  • A lot of this can be traced back to the invention of photography, which is a fun point of reference, if one goes to dig up the debate at the time.

    In any case, the idea that humans can only produce so fast for so long, and that this somehow cleans the channel, just doesn’t track. We are already flooded by low quality content enabled by social media. There are seven billion of us, two or three billion of those on social platforms, and a whole bunch of the content being shared is made by pointing phones at things using corporate tools. I guarantee that people will still go to museums to look at art regardless of how much cookie cutter AI stuff gets shared.

    However, I absolutely wouldn’t want a handful of corporations to have the ability to empower their employed artists with tools to run 10x faster than freelance artists. That is a horrifying proposition. Art is art. The difficulty isn’t in making the thing technically (say hello, Marcel Duchamp, I bet you thought you had already litigated this). Artists are gonna art, but it’s important that nobody has a monopoly on the tools to make art.


  • It’s not right to say that ML output isn’t good at practical tasks. It is, it’s already in use, and it has been for ages. The conversation about these is guided by the relatively anecdotal fact that chatbots and image generation got good, so this stuff went viral, but ML models are being used for a bunch of practical purposes, from speeding up repetitive, time-consuming tasks (e.g. cleaning up motion capture, facial modelling or lip animation in games and movies) to specialized ones (so much science research is using ML tools these days).

    Now, a lot of those are done using fully owned datasets, but not all, and the ramifications there are also important. People dramatically overestimate the impact of trash product flooding channels (which is already the case, as you say) and dramatically underestimate the applications of the underlying tech beyond the couple of viral apps they only got access to recently.


  • Yep. The effect of this as currently framed is that you get data ownership clauses in EULAs forever and only major data brokers like Google or Meta can afford to use this tech at all. It’s not even a new scenario, it already happened when those exact companies were pushing facial recognition and other big data tools.

    I agree that the basics of modern copyright don’t work great with ML in the mix (or with the Internet in the mix, while we’re at it), but people are leaning on the viral negativity to slip by very unwanted consequences before anybody can make a case for good use of the tech.


  • I think viral outrage aside, there is a very open question about what constitutes fair use in this application. And I think the viral outrage misunderstands the consequences of enforcing the notion that you can’t use openly scrapable online data to build ML models.

    Effectively what the copyright argument does here is make it so that ML models can only legally be made by Meta, Google, Microsoft and maybe a couple of other companies. OpenAI can say whatever, I’m not concerned about them, but I am concerned about open source alternatives getting priced out of that market. I am also concerned about what it does to previously available APIs, as we’ve seen with Twitter and Reddit.

    I get that it’s fashionable to hate on these things, and it’s fashionable to repeat the bit of misinformation about models being a copy or a collage of training data, but there are ramifications here people aren’t talking about and I fear we’re going to the worst possible future on this, where AI models are effectively ubiquitous but legally limited to major data brokers who added clauses to own AI training rights from their billions of users.