• silasmariner@programming.dev · 4 days ago

      Well, it depends how much data integrity is worth to you and how your system works. Every write in Postgres is already a transaction; when you can get away with simple CRUD stuff, often there’s nothing to do, you have transactionality already. Transaction isolation levels are where db operation costs might change under concurrent conflicting writes, but you can tune that by ensuring single-writer-per-partition or similar in your server logic, and it might add a ms or two. OTOH if you have heavy contention it can be much more expensive. The performance implications are complicated, but they can certainly be kept to a fraction of overall cost depending on your workload!
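A minimal sketch of the “every write is already a transaction” point, using Python’s stdlib sqlite3 as a stand-in for Postgres (the table, names, and amounts here are invented for illustration): single CRUD statements get transactionality for free, and multi-statement logic only needs an explicit transaction wrapped around it.

```python
import sqlite3

# Illustration only: the thread is about Postgres, but the same principle
# can be shown with the stdlib sqlite3 module.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

# Simple CRUD: one statement, one transaction; nothing extra to do.
conn.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
conn.commit()

# Multi-statement logic: group into one explicit transaction so a failure
# midway rolls both writes back together.
with conn:  # opens a transaction; commits on success, rolls back on error
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")

print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# [(50,), (60,)]
```

In Postgres the same grouping is `BEGIN; … COMMIT;`, and the per-transaction knob the comment above mentions is `SET TRANSACTION ISOLATION LEVEL …`.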

      • Tja@programming.dev · 4 days ago

        Again, not data integrity (error correction) but consistency (the C in ACID). Adding two milliseconds to a half-millisecond operation is by no means cheap…

        • silasmariner@programming.dev · 4 days ago

          But adding it to an 80 ms operation is. If your operation is 0.5 ms it’s either a read on a small table, or maybe a single write; transaction isolation wouldn’t even be relevant there. You’re right that I did mean consistency rather than integrity, a slip of terminology, but not really worth quibbling over. The point I meant was that I like my data to make sense, a funny quirk of mine.
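The arithmetic behind that point: the same fixed transaction overhead is either dominant or negligible depending on the base cost of the operation (numbers taken from the thread):

```python
# A 2 ms overhead relative to a 0.5 ms point lookup vs an 80 ms operation.
overhead_ms = 2.0
results = {}
for base_ms in (0.5, 80.0):
    results[base_ms] = overhead_ms / base_ms * 100  # overhead as % of the op
print(results)  # {0.5: 400.0, 80.0: 2.5}
```

So the same 2 ms is a 400% tax on the fast path but a 2.5% tax on the slow one.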

          • Tja@programming.dev · 4 days ago

            If your single operations take 80 ms, either it’s a toy app or someone didn’t do their job (unoptimized queries, wrong technology, wrong modeling, etc).

            • silasmariner@programming.dev · edited · 19 hours ago

              Lol what an absurd take. A transaction is a sequence of operations, not a single one, so even small tables can meet that threshold with enough query logic. I guess you’re unfamiliar with medium-to-large datasets, but it’s not uncommon to use the aggregate functions that SQL provides in real-world situations, and on large tables those can easily exceed 1 s. Toy my arse. Go play with yourself
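A sketch of why an aggregate is nothing like a 0.5 ms point lookup, again with stdlib sqlite3 standing in for Postgres; the `events` table and row count are made up, but the shape of the query is the everyday reporting kind being described:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
# Load 100k rows; real reporting tables are often orders of magnitude bigger.
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    ((i % 1000, float(i % 97)) for i in range(100_000)),
)
conn.commit()

# One SQL statement, but it scans and groups every row; on a multi-million-row
# table without a covering index this class of query can plausibly take seconds.
top = conn.execute(
    "SELECT user_id, SUM(amount) AS total FROM events"
    " GROUP BY user_id ORDER BY total DESC LIMIT 1"
).fetchone()
print(top)
```

A single statement, yet the cost scales with the table, not with the row you asked about.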

              Although this is no surprise tbh because apparently you don’t understand why transactions are even necessary. Benchmarks shmenchmarks. Whether it works is more important.

              I do not apologise for the downvote because this is smug shit only a junior would say

              • Tja@programming.dev · 9 hours ago

                Exactly my point. Wrong technology. If your operation takes more than 1s and you just accept it, you are not very good at your job. No worries, more job security for me!

                I would recommend reading a few books, but given that you couldn’t even read my first post about when to use transactions, it’s futile.