• 0 Posts
  • 5 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I think something that contributes to people talking past each other here is a difference in how necessary/desirable they believe revolution (the overthrow of the U.S. government) to be. Many of the people I’ve talked to online who advocate not voting, and who are also highly engaged, believe in revolution as the necessary alternative. Which does make sense: it’s hard to believe that the system is fundamentally genocidal and not worth working within (by voting for the lesser evil) without also believing that the solution is to overthrow that system.

    And in that case, we’re discussing the wrong thing. The question isn’t whether you should vote or not; it’s whether the system is worth preserving (and, of course, what you do to change it, and how much violence in a revolution is necessary/acceptable). If you believe it is worth preserving, then clearly you should vote. And if you believe it isn’t, there’s a stronger case for not voting and instead working on a revolution.

    Does anyone here believe that revolution isn’t necessary and also that voting for the lesser evil isn’t necessary?

    The opposite is more plausible to me: believing in the necessity of revolution while also voting.

    Personally, I believe that a revolution or an attempt at one is unlikely to be effective, and that voting plus activism is more effective, and also requires agreement from fewer people in order to make progress on its goals. Tragically, this likely means that thousands more people will be murdered, but I don’t know what could actually be effective at stopping that.




  • I’d be surprised if it were significantly less. A comparable 70-billion-parameter model from Llama takes roughly 140GB to store at 16 bits per parameter. Supposedly the largest current ChatGPT model runs to about 175 billion parameters, which would take a few hundred GB to store at the same precision. There are ways to trade off some accuracy to save a lot of space (quantization), but you’re not going to get it under tens of GB; the sketch below shows the arithmetic.
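
    A rough sketch of that storage arithmetic, in Python (the 70B/175B parameter counts are the figures from above, and the 8/4-bit rows assume the quantization tradeoff just mentioned):

    ```python
    # Back-of-the-envelope weight storage: parameter count x bytes per parameter.
    # Counts dense weights only; real checkpoint files add minor metadata overhead.

    def model_size_gb(params_billions: float, bits_per_param: int) -> float:
        return params_billions * 1e9 * bits_per_param / 8 / 1e9

    for params in (70, 175):
        for bits in (16, 8, 4):  # full precision vs. common quantized widths
            print(f"{params}B params @ {bits}-bit: {model_size_gb(params, bits):5.0f} GB")
    ```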

    These models really are streaming through that many GB of parameters once for every token (roughly every word) of output. GPUs and tensor processors are crazy fast. For comparison, think about how much data a GPU produces for 4K60 video output: it’s on the order of 1–2GB per second, as sketched below. And the memory bandwidth recommended to render that is on the order of 400GB per second. Crazy fast.
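
    The same kind of arithmetic for the video comparison, assuming uncompressed 8-bit RGBA frames (real display pipelines differ in pixel format and compression):

    ```python
    # Rough data rate of raw 4K60 video output: pixels x bytes x frames per second.
    width, height = 3840, 2160   # 4K UHD resolution
    bytes_per_pixel = 4          # assuming 8-bit RGBA
    fps = 60

    gb_per_second = width * height * bytes_per_pixel * fps / 1e9
    print(f"~{gb_per_second:.1f} GB/s of raw pixel data")  # ~2.0 GB/s
    ```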