• 75 Posts
  • 388 Comments
Joined 1 year ago
Cake day: September 13th, 2024


  • An extra hard drive for offline backup of my home server. Just knowing I have a cold, unplugged copy of my data in my drawer has made me less paranoid about accidentally “rm -rf”-ing my computer and taking all the mount points with it, or about my dog getting her paw caught in a wire (she likes to run around haphazardly and is pretty clumsy) and dragging the whole hard drive enclosure down with her.

    Ideally I wouldn’t keep that drive in my house, but I don’t have anywhere else to put it. Maybe someday I’ll get a safe deposit box or something, but then my lazy ass probably wouldn’t bother to retrieve the drive and sync my data nearly as often. The sync step itself is small enough to script, as sketched below.
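
    For what it’s worth, here is a minimal sketch of that sync script. Everything in it is a placeholder I made up (device label, mountpoint, source paths); it just shells out to rsync and assumes it runs as root:

        #!/usr/bin/env python3
        # Cold-backup sync sketch: mount the drive, mirror with rsync, unmount.
        import subprocess
        import sys

        DEVICE = "/dev/disk/by-label/COLDBACKUP"  # hypothetical drive label
        MOUNTPOINT = "/mnt/coldbackup"            # hypothetical mountpoint
        SOURCES = ["/home", "/etc", "/srv"]       # whatever is worth keeping

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        def main():
            run(["mount", DEVICE, MOUNTPOINT])
            try:
                # --archive preserves ownership/permissions/times;
                # --delete mirrors removals so the copy stays exact.
                run(["rsync", "--archive", "--delete", *SOURCES, MOUNTPOINT])
            finally:
                # Unmount so the drive really is cold again once unplugged.
                run(["umount", MOUNTPOINT])

        if __name__ == "__main__":
            try:
                main()
            except subprocess.CalledProcessError as err:
                sys.exit(err.returncode)

    The point of the finally block is that the drive gets unmounted even if rsync fails partway, so it never sits mounted and vulnerable to the same rm -rf it is meant to survive.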



  • An AGI wouldn’t need to read every book, because it could build on the knowledge it already has to draw new conclusions it was never “taught.”

    Also, an AGI would be able to keep a consistent narrative regardless of how much data or context it has, because it could build an internal model of what is happening and selectively remember the important things over the inconsequential ones (not to mention assess what matters and what can be forgotten to shed processing overhead), all things a human does instinctively when given more information than the brain can immediately handle.

    An LLM, meanwhile, is totally dependent on how much context it actually has buffered, and giving it too much information will literally push the old information out of its context, never to be recalled again. It has no ability to determine what’s worth keeping and what’s not, only what’s more or less recent. I’ve personally noticed this especially with smaller locally run LLMs with very limited context windows. If I start troubleshooting some Linux issue with one, I have to be careful how much of a log I paste into the prompt, because if I paste too much, it will literally forget why I pasted the log in the first place. This is most obvious with DeepSeek and other reasoning models, which will actually start trying to figure out why they were given that input while “thinking,” but it’s a problem with any context-based model, because the context is its only active memory.

    I think the reason this happens so obviously when you paste too much in a single prompt, and less so when having a conversation of smaller prompts, is that the model also has its previous outputs in its context. So while it might have forgotten the very first prompt and response, it repeats the information enough times in subsequent turns to keep it in its more recent context (ever notice how verbose AI tends to be? That could potentially be a mitigation strategy). Meanwhile, when you give it a single prompt as big as or bigger than its context window, it completely overwrites the previous responses, leaving no hint of what was there before. The sketch below shows that first-in, first-out eviction in miniature.
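
    Here is a toy illustration of that eviction in plain Python. It is not how any real inference engine is implemented (real models count tokens with a tokenizer, and the class and numbers here are made up), but the first-in, first-out effect is the same:

        # Toy FIFO context: whole turns are evicted oldest-first when the
        # token budget overflows. n_tokens() fakes tokenization with a
        # whitespace split; everything here is illustrative, not a real API.
        from collections import deque

        def n_tokens(text: str) -> int:
            return len(text.split())  # crude stand-in for a real tokenizer

        class FifoContext:
            def __init__(self, budget: int = 512):  # tiny window, like a small local model
                self.budget = budget
                self.turns = deque()  # (role, text) pairs, oldest first
                self.used = 0

            def add(self, role: str, text: str) -> None:
                self.turns.append((role, text))
                self.used += n_tokens(text)
                # Evict whole turns, oldest first, until the budget fits again.
                while self.used > self.budget and len(self.turns) > 1:
                    _, old = self.turns.popleft()
                    self.used -= n_tokens(old)
                # If one paste alone exceeds the window, only its tail survives:
                # the model never even sees the start of the log.
                if self.used > self.budget:
                    role0, text0 = self.turns[0]
                    tail = text0.split()[-self.budget:]
                    self.turns[0] = (role0, " ".join(tail))
                    self.used = self.budget

        ctx = FifoContext()
        ctx.add("user", "Why does my service crash on boot? Relevant log below.")
        ctx.add("user", "log line " * 600)  # ~1200 "tokens", bigger than the whole window
        print(len(ctx.turns), "turn(s) kept")  # 1: the question itself was evicted

    In a back-and-forth conversation the model’s own replies restate the question, so copies of it keep re-entering the recent end of the window; a single oversized paste leaves no such trace.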



  • Specific info, and I’m guessing really specific:

    One of the closest allies of the U.S., the U.K. has reportedly suspended sharing some intelligence with the Pentagon due to concern over the boat strikes in the Caribbean, according to CNN.

    […]

    In response to a request for comment from TIME, a U.K. government spokesperson said on Wednesday: “It is our longstanding policy to not comment on intelligence matters.”

    They went on to say that the “U.S. is our closest ally on security and intelligence. We continue to work together to uphold global peace and security, defend freedom of navigation, and respond to emerging threats.”

    Which makes me think they haven’t actually stopped sharing all that much.