• 13 Posts
  • 59 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • For me it was playing Life is Strange for the first time. I bought it because it had been listed on Steam as “Overwhelmingly Positive” for ages, and at the time I was really enjoying the story-based games that companies like Telltale were producing. So, knowing nothing about the game, I picked it up and started playing it.

    The first act was slow. What I didn’t realize at the time was that the writers were establishing Arcadia Bay, a small town in the Pacific Northwest, as a character. All the people in it needed to be recognizable, so it took time to teach the player who they were, what mattered to them, how they fit into the town, and what their flaws were. I actually stopped playing for a while after the first act. But, luckily, I picked it back up over the holiday season.

    I still remember playing it in my living room. I was so thoroughly absorbed into the story that when something tense happened in the second act and I couldn’t stop it the way I normally could, I was literally crushing the controller as if I could make things work by pulling the triggers harder.

    I am decidedly not the demographic that Life is Strange was written to appeal to, but they did such a good job writing a compelling story that it didn’t matter. I got sucked in, the characters became important to me, and I could not. put. it. down. I played straight through the night until I finished it.

    (If you’ve played it and you’re wondering, I chose the town the first time I played it.)

    I’ll never forget that game. I’ll also never forget the communities that spawned around it. I read the accounts of people who had just played it for the first time for about a year because it helped me relive the experience I had when I played it. It was incredible.

  • Regardless of whether any of the titles actually contain said content, ChatGPT’s varying responses highlight troubling deficiencies of accuracy, analysis, and consistency. A repeat inquiry regarding The Kite Runner, for example, gives contradictory answers. In one response, ChatGPT deems Khaled Hosseini’s novel to contain “little to no explicit sexual content.” Upon a separate follow-up, the LLM affirms the book “does contain a description of a sexual assault.”

    On the one hand, the possibility that ChatGPT will hallucinate that an appropriate book is inappropriate is a big problem. But on the other hand, making high-profile mistakes like this keeps the practice in the news and keeps showing how bad it is to ban books, so maybe it has a silver lining.

  • Those safetensors files are all that I have ever used.

    For reference, I’m using a 2080 Ti. That’s got about 11 GB of VRAM, I think. I’m not having any freezes whatsoever. I’ve also tried it on my wife’s shiny new 4080. Definitely a speed difference, but again, no freezes or instability. Generating the 1024x1024 images does take forever. I actually went back to 512x512 and stayed there. I can always upscale something that I like.

  • I’m not an expert, but what I read said that you use SDXL by first using txt2img to generate an image using the base checkpoint, and then you send that image to img2img and use exactly the same prompt there with the refiner checkpoint.

    That makes for a longer workflow than I’m used to, so sometimes I just use one or the other in txt2img and see what I get. Sometimes I forget to change the model when I switch between img2img and txt2img, too. I always seem to get results of similar quality when I use just one of the checkpoints.
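    To make the order of operations concrete, here’s a minimal Python sketch of that two-stage flow. The `base` and `refiner` callables stand in for the two SDXL checkpoints (however you load them); the function names and call shapes are illustrative assumptions, not a real API. The key point from above is that both passes reuse exactly the same prompt:

    ```python
    def sdxl_generate(base, refiner, prompt):
        """txt2img with the base checkpoint, then img2img with the
        refiner checkpoint, passing the same prompt to both stages."""
        draft = base(prompt)            # stage 1 (txt2img): prompt -> rough image
        final = refiner(prompt, draft)  # stage 2 (img2img): same prompt + draft image
        return final

    # Stand-in callables so the flow can be traced without real models:
    base = lambda p: f"draft({p})"
    refiner = lambda p, img: f"refined({p}, {img})"
    print(sdxl_generate(base, refiner, "a lighthouse at dusk"))
    # -> refined(a lighthouse at dusk, draft(a lighthouse at dusk))
    ```

    Skipping stage 2 (just using the base checkpoint in txt2img) matches the shortcut described above.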

    It should be interesting to see what people come up with training their own checkpoints off of SDXL, though.