• 2 Posts
  • 9 Comments
Joined 11 months ago
Cake day: August 8th, 2023



  • Splunk is already very expensive, to be honest, with their policy of charging based on indexed logs rather than logs actually used (i.e. hit by searches), plus the need to index a lot of logs 'just in case something breaks'. Bit of hearsay there - while I don’t work for the team that manages indexing, I’ve had quite a few conversations with our internal team.

    I was surprised we were moving from Splunk to a lesser-known proprietary competitor (we tried and gave up on Elasticsearch years ago). Splunk is much more powerful for power users, but the alternative cost 7-10 times less, and unfortunately most users didn’t use Splunk’s power-user functionality enough to justify it over the competitor.

    Being a power user with lots of dashboards, my team still uses Splunk for now, and I’m having background conversations to make sure we don’t lose it. I think Cisco would lose out if they jacked up prices - perhaps they’d instead use Splunk as an additional value-add for their infrastructure offerings?



  • As someone who has done a lot of debugging in the past, and has also written many log analysis tools, it’s not an either/or - they complement each other.

    I’ve seen logs dismissed a lot in these threads recently, and while I love the debugger (I’d boast that I know very few people who can play with gdb like I can), logs are an art, and just as essential.

    The beginner printf phase is an inefficient learning stage that people get past early in their careers once they learn the debugger, but eventually they need to relearn the art of proper logging too, and understand how to use both tools (logging and debugging).
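    To show what I mean by "proper logging" versus bare prints, here’s a minimal sketch using Python’s standard logging module - the component name, function, and fields are made up purely for illustration:

    ```python
    import logging

    # A sketch of "proper logging" vs. bare prints: levels, timestamps,
    # and enough context (ids, parameters) to trace a single request later.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    log = logging.getLogger("orders")  # hypothetical component name

    def place_order(order_id, amount):
        # Log the input parameters up front so a later grep or index search
        # can reconstruct what this call was asked to do.
        log.info("place_order start order_id=%s amount=%s", order_id, amount)
        try:
            if amount <= 0:
                raise ValueError("amount must be positive")
            log.info("place_order ok order_id=%s", order_id)
        except Exception:
            # Stack trace plus the same order id, so the failure is traceable.
            log.exception("place_order failed order_id=%s", order_id)
            raise
    ```

    The point is that every line carries the ids and parameters you’ll want when someone reports a bug months later.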

    There’s a stage when you love prints.

    Then you discover debuggers and realize they are much more powerful. (For those of you who haven’t used gdb enough: you can script it to iterate STL (or any other) containers, and test your fixes without writing any code yet.)
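    A rough sketch of that kind of gdb scripting is below - it assumes libstdc++’s std::vector internals, and the command name is made up, so adapt it to your own containers:

    ```python
    # vec_dump.py - load inside gdb with: (gdb) source vec_dump.py
    # A sketch of scripting gdb in Python to walk a std::vector; the field
    # names (_M_impl, _M_start, _M_finish) assume the libstdc++ layout.
    import gdb

    class VecDump(gdb.Command):
        """Print every element of a std::vector: vec-dump <expression>"""
        def __init__(self):
            super().__init__("vec-dump", gdb.COMMAND_DATA)

        def invoke(self, arg, from_tty):
            vec = gdb.parse_and_eval(arg)         # evaluate the expression in the inferior
            impl = vec["_M_impl"]
            start, finish = impl["_M_start"], impl["_M_finish"]
            for i in range(int(finish - start)):  # pointer difference = element count
                gdb.write("[%d] = %s\n" % (i, str((start + i).dereference())))

    VecDump()  # registering the command makes "vec-dump my_vector" available
    ```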

    And then, once your (and everyone else’s) code has been in production a while, and some random client reports a bug that only happened for a few hard-to-trace events, guess what?

    Logs are your best friend. You use them to get the scope of the problem and the region it’s in (much easier if you have indexing tools like Splunk, though grep/awk/sort/uniq also work). You also get the input parameters and output results, and often spot the root cause without needing to spin up a debugger. Saves a lot of time for everyone.
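    For the non-Splunk case, the grep/awk/sort/uniq step amounts to something like this rough Python equivalent - the log path and line format here are hypothetical:

    ```python
    # Rough equivalent of grep | awk | sort | uniq -c for scoping a problem:
    # count ERROR lines per hour and per host to see when and where it happens.
    import re
    from collections import Counter

    pattern = re.compile(r"^(?P<hour>\d{4}-\d{2}-\d{2}T\d{2}):\S* (?P<host>\S+) .*ERROR")
    buckets = Counter()

    with open("app.log") as f:          # hypothetical log file
        for line in f:
            m = pattern.match(line)
            if m:
                buckets[(m.group("hour"), m.group("host"))] += 1

    for (hour, host), count in sorted(buckets.items()):
        print(f"{hour} {host} {count}")
    ```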

    If you can’t, you replicate - it often takes a bit of time, but at least the logs give you a better chance of using the right parameters. Then you spin up the debugger (the heavy guns) when all else fails.

    It takes more time, and in production systems you often have a lot of issues that turn out to be working-as-designed, plus a lot of upstream/downstream issues, and logs will get you through those much faster.