• 2 Posts
  • 790 Comments
Joined 2 years ago
Cake day: June 9th, 2023




  • I don’t know much Japanese, but the bits I do know suggest it’s a very different language from English. Not just different sounds, but a different approach to expressing things. Like, I think instead of saying “I’m hungry”, they just say “hungry!” Presumably, though, they do use “I” when it’s needed for disambiguation.

    For example, if you’re with a friend and someone asks “are you guys college students?”, the response would probably be something like “He is, but I’m not”, right?




  • I’ve heard, and I don’t know if this is true, that voice actors who specialize in narrating books have to be superstars at this. Not only are they expected to sight-read an entire book without making mistakes, they also need to do the required acting so exciting scenes are exciting, happy scenes are happy, gloomy scenes are gloomy, etc. Plus, as they come across new characters in the book, they’re supposed to give them distinct voices, then remember and recreate those voices as the characters show up later in the book.

    Of course, a blockbuster book with a big budget for the audio version won’t have an actor wing it. They’ll be able to pay to have an actor and a director read the book first, and then have the director work with the actor to tease out the best possible performance. But, on a smaller budget, margins are tighter, so every second in the voice-over booth counts.


  • One thing I love doing is to learn to say “I don’t speak <language>” as well as possible in a language I don’t speak. If you’re good enough at it, people will assume it’s a joke and try to speak to you in that language you don’t actually know. Apparently I’m pretty good at saying it in Portuguese, but I wouldn’t know.









  • So closer to average human intelligence than it would appear

    No, zero intelligence.

    It’s like how people are fooled by optical illusions. It doesn’t mean optical illusions are smart, it just means that they tickle a part of the brain that sees patterns.

    a paper on a new architecture they developed that has serious promise

    Oooh, a new architecture and serious promise? Wow! You should invest!

    The metric is the code. We can look at the code, see what kind of mistakes it’s making

    No, we can’t. That’s the whole point. If that were possible, then companies could objectively determine who their best programmers were, and that’s a holy grail they’ve been chasing for decades. It’s just not possible.

    and then alter the model to try to be better

    Nobody knows how to alter the model to try to be better. That’s why multi-billion dollar companies are releasing new models that are worse than their previous models.

    Maybe it’s next month

    It’s definitely not next month, or next year, or next century. Nobody has any idea how to get to actual intelligence, and despite the hype progress is as slow as ever.

    Every new development could be the big one

    Keep drinking that Kool-Aid.


  • It’s probably gonna be a complex model that uses modules like LLMs to fulfill a compound task.

    That sounds very hand-wavey. But, even the presence of LLMs in the mix suggests it isn’t going to be very good at whatever it does, because LLMs are designed to fool humans into thinking something is realistic rather than actually doing something useful.

    We know that it can output code, which means we have a quantifiable metric to make it better at coding

    How so? Project managers have been working for decades to quantify code, and haven’t managed to make any progress at it.

    It’s not if we’re going to get a decent coding AI, it’s when.

    The year 30,000 AD doesn’t count.


  • AI can’t write programs, at least not complex programs. The programs and functions it can write well are the ones that are very well represented in the training data – i.e. ultra-simple functions, or programs that have been written and re-written millions of times. What it can’t do is anything truly innovative. In addition, it can’t follow directions, because it has no understanding of what it’s doing: it doesn’t understand the problem, and it doesn’t understand its solution to the problem.

    The only thing LLMs are able to do is create a believable simulation of what the solution to the problem might look like. Sometimes, if you’re lucky, the simulation is realistic enough that the output actually works as a function or program. But, the more complex the problem, or the more distant from the training data, the less it’s able to simulate something realistic.

    So, rather than building a ladder where the rungs turn into propellers, it’s building one where the higher the ladder gets, the less the rungs actually look like rungs.