• jmcs@discuss.tchncs.de · 3 months ago

      And infinitely lower reliability, because you can’t have failovers (well, you can, but people who run everything on the same host won’t). It’s fine for something non-critical, but I wouldn’t do it with anything that pays the bills.

      • tias@discuss.tchncs.de · 3 months ago

        I work for a company that has operated like this for 20 years. The system goes down sometimes, but we can fix it in less than an hour. At worst the users get a longer coffee break.

        A single click in the software can often generate 500 SQL queries, so if you go from 0.05 ms to 1 ms latency you add half a second to clicks in the UI and that would piss our users off.
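        The numbers above check out; a quick back-of-the-envelope sketch (all figures taken from the comment, nothing measured):

```python
# Hypothetical figures from the comment above: 500 queries per click,
# 0.05 ms round trip on the same host vs 1 ms over the network.
queries_per_click = 500
same_host_latency_ms = 0.05   # round trip over a local socket
network_latency_ms = 1.0      # round trip to a remote database host

# Sequential queries pay the round-trip latency once per query.
added_ms = queries_per_click * (network_latency_ms - same_host_latency_ms)
print(f"extra wait per click: {added_ms:.0f} ms")  # → 475 ms, i.e. ~half a second
```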

        Definitely not saying this is the best way to operate at all times. But SQL has a huge problem with false dependencies between queries, and APIs that make it very difficult to pipeline queries, so my experience has been that I/O-bound applications easily become extremely sensitive to latency.

        • dan@upvote.au · 3 months ago

          > A single click in the software can often generate 500 SQL queries, so if you go from 0.05 ms to 1 ms latency you add half a second to clicks

          Those queries don’t all have to be executed sequentially though, do they? Usually if you have that many queries, at least some of them are completely independent of the others and thus can execute concurrently.

          You don’t even need threading for that, just non-blocking IO and ideally an event loop.
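          A minimal sketch of that idea on a single event loop, with `asyncio.sleep` standing in for a real async database driver (asyncpg, aiomysql, etc.); the query names and the 1 ms round-trip figure are illustrative:

```python
import asyncio
import time

# Hypothetical async "driver": each query is simulated as one network
# round trip (asyncio.sleep stands in for the real non-blocking I/O).
LATENCY_S = 0.001  # 1 ms round trip

async def run_query(sql: str) -> str:
    await asyncio.sleep(LATENCY_S)  # the wire wait
    return f"result of {sql!r}"

async def sequential(queries: list[str]) -> list[str]:
    # Dependent queries: each one waits for the previous round trip,
    # so total latency grows linearly with the query count.
    return [await run_query(q) for q in queries]

async def concurrent(queries: list[str]) -> list[str]:
    # Independent queries: all the round trips overlap on one event
    # loop, so total latency stays close to a single round trip.
    return await asyncio.gather(*(run_query(q) for q in queries))

queries = [f"SELECT {i}" for i in range(100)]

start = time.perf_counter()
asyncio.run(sequential(queries))
t_seq = time.perf_counter() - start

start = time.perf_counter()
asyncio.run(concurrent(queries))
t_conc = time.perf_counter() - start

print(f"sequential: {t_seq * 1000:.1f} ms, concurrent: {t_conc * 1000:.1f} ms")
```

          No threads involved: `asyncio.gather` just lets the 100 simulated round trips wait at the same time, which is exactly the pipelining win the comment describes.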