That makes me wonder! All these new GPU uses are enormous energy hogs. Is gaming like that too?
While I understand your point, there’s a mistake I see far too often in the industry: using relational DBs where the data model is better suited to other kinds of DBs. For example, JSON documents are better stored in document DBs like MongoDB. I realize that your use case doesn’t involve querying the JSON - in which case it can simply be stored as text. Similar mistakes are made with time-series data, key-value data and directory-type data.
I’m not particularly angry at such (ab)uses of RDBs. But you’ll probably get better results with NoSQL DBs. Even in cases that involve multiple data models, you could combine multiple databases to achieve the best results. Or better yet, there are adaptors for RDBMSs that let a single engine behave like several database types at once. For example, FerretDB makes PostgreSQL behave like MongoDB, PostGIS turns it into a geographic database, etc.
Nobody ever learned from the SolarWinds attack. If a massive amount of your infrastructure is backed by some obscure software, bad actors will either try to insert a backdoor or find a zero-day exploit. If people keep neglecting what just happened, CrowdStrike will fall heels up, faster than SolarWinds did.
I don’t think that Rust would have prevented this one, since this isn’t a compile-time error (for the code loader). The address dereferencing would have been inside an unsafe block. What was missing was a validity check on the CI build artifacts and a payload check on the client side.
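As a rough sketch of what I mean (names, format and checks here are entirely hypothetical, not CrowdStrike’s actual code), a client-side payload check could look something like this:

    import hashlib
    import struct

    EXPECTED_MAGIC = b"\xAA\xAA"  # hypothetical file signature

    def load_channel_file(raw: bytes, expected_sha256: str) -> bytes:
        """Reject a payload before the parser ever dereferences anything."""
        # Integrity: does the artifact match what CI signed off on?
        if hashlib.sha256(raw).hexdigest() != expected_sha256:
            raise ValueError("checksum mismatch: corrupt or tampered artifact")
        # Sanity: does it even look like the format the parser expects?
        if len(raw) < 4 or raw[:2] != EXPECTED_MAGIC:
            raise ValueError("bad header: refusing to parse")
        (record_count,) = struct.unpack_from("<H", raw, 2)
        if record_count == 0:
            raise ValueError("empty payload: nothing to load")
        return raw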
I do, however, think that the ‘fingers-crossed’ approach to memory safety in C and C++ must stop. Rust is a great fit for this use case.
I wish there was something more interesting to do there.
You are not expected to remember a v6 address - or even v4 for that matter. They are designed for machines. DNS is designed for humans.
What do you do on the yggdrasil network?
Python decided to use a single convention (semantic whitespace) instead of two separate ones for machine-decodable scoping and manual/visual scoping. That’s part of Python’s design philosophy: the program should behave exactly the way people expect it to (without strenuous reasoning exercises).
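A trivial sketch of what I mean - the indentation a reader uses to see the scope is the very thing the parser uses to define it, so the two can never disagree:

    def classify(n):
        if n % 2 == 0:
            return "even"  # belongs to the if-block...
        return "odd"       # ...while this belongs to the function body,
                           # exactly as the indentation suggests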
But some people treat it as the original sin. Not surprised though. I’ve seen developers and engineers nurture weird irrational hatred towards all sorts of conventions. It’s like a phobia.
I have similar views about YAML. It may not be the most elegant - it had to be a superset of JSON, after all. But YAML is a semi-configuration language while JSON is a pure serialization language. Try writing a Kubernetes manifest or a Compose file in pure JSON, without whitespace alignment or comments (which pure JSON doesn’t support anyway). Let’s see how pleasant you find it.
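For illustration, a made-up Compose-style fragment (not a real manifest) - the YAML version can carry comments and alignment that the JSON equivalent simply has no syntax for:

    services:
      web:
        image: nginx:1.27   # pinned; bump deliberately
        ports:
          - "8080:80"       # host:container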
I think assembly was easier back then. Some architectures still are. But many architectures like x86 got incredibly complicated.
I looked at the post again and they do talk about recursion for looping (my other reply talks about map over an iterator). Languages that use recursion for looping (like Scheme) rely on an optimization called ‘tail call optimization’ (TCO). The idea is that if the last operation in a function is a recursive call (a call to itself), you can skip all the complexities of a regular function call - like pushing variables to the stack and creating a new stack frame - and reuse the current frame instead. This way, recursion becomes as performant as iteration and avoids problems like stack overflow.
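A sketch of the transformation (written in Python syntax purely for illustration - CPython notoriously does not perform TCO itself): a Scheme-style compiler effectively rewrites the first function into the second:

    def count_down(n):
        if n == 0:
            return "done"
        return count_down(n - 1)  # tail call: nothing left to do after it returns

    # What a TCO-capable compiler effectively turns it into:
    def count_down_tco(n):
        while n != 0:  # the recursive call becomes a jump,
            n = n - 1  # reusing the same stack frame
        return "done"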
They aren’t talking about using recursion instead of loops; they’re talking about the map method on iterators. For each element the iterator yields, map applies a given function/closure and yields the result, producing a new iterator (which is typically collected into a list). This is a functional programming pattern common to many languages, including Python and Rust.
This pattern has no risk of stack overflow, since each invocation of the function completes before the next one begins. The construct expands to some sort of loop during execution. The only possible overhead is one extra function call per iteration (where a hand-written loop would contain the body directly). Even that won’t be a problem if the compiler can inline the function.
The fact that this is functional programming opens up additional avenues for optimization. For example, a chain of maps (or other iterator adaptors) can be fused into a single loop. In practice, this pattern is as fast as hand-written loops.
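A quick example (in Python, since the thread mentions it): map is lazy, so even a chain of maps makes exactly one pass over the data when it’s finally consumed:

    nums = range(1_000_000)

    # Two chained maps, but the data is traversed exactly once:
    doubled_then_squared = map(lambda x: x * x, map(lambda x: 2 * x, nums))
    total = sum(doubled_then_squared)  # drives the single fused loop

    # The equivalent hand-written loop:
    total_loop = 0
    for x in nums:
        total_loop += (2 * x) ** 2
    assert total == total_loop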
Uuuh, am I no true Scotsman?
That’s a terrible and disingenuous take. I’m saying that you won’t understand why it’s useful till you’ve used it. Spinning that as no true Scotsman fallacy is just indicative of that ignorance.
You keep iterating the same arguments as the rest here, and I still adhere to my statement above: hardly anybody needs those tools.
And you keep repeating that falsehood. Isn’t that the real no true Scotsman fallacy? How can you even pretend to know that nobody needs it? You can’t speak for everyone else. Those who use it find it useful in several other ways that I and others have explained. You can’t just judge it away from a position of ignorance.
People are quick to judge without considering the circumstances. Wasn’t the customer’s attitude equally wrong? Aren’t you implying that the service person should have let her bully him?
Only users who don’t know rebasing and the advantages of a crafted history make statements like this. Several tools and workflows depend on a clean commit history. You need it for conventional-commit tools (like Commitizen), pre-commit hook tools, git blame, git bisect, etc.
You can have both. I’ll get to that later. But first, let me explain why edited history is useful.
Unedited histories are very chaotic and often contain errors, commits with partial features, abandoned code, reverted code, out-of-sequence code, etc. That is useful for preserving the actual progress of your own thought, but such histories are a nightmare to review. Commits should be complete (a single commit contains a full feature) and in proper order. As a reviewer, you also wouldn’t want to waste time reviewing someone else’s mistakes, experiments, reverted code, etc. Self-complete commits have another advantage - users can omit an entire feature by omitting a single commit.
Now the part about having both the unedited and the carefully crafted history: rebasing doesn’t erase the original branch. You can preserve it by creating a new branch first, or recover it from the reflog afterwards. I use a separate branch to preserve the original development history, then submit the edited/crafted branch upstream.
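Concretely, with made-up branch names, that workflow looks something like this:

    git branch feature-messy   # snapshot the original, unedited history
    git rebase -i main         # craft the clean history on the current branch
    git push origin feature    # submit the edited branch upstream
    # If anything goes wrong, the pre-rebase commits are still in the reflog:
    git reflog                 # find the old tip, then: git reset --hard <sha>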
I agree that merge is the easier strategy with amateurs - by amateurs I mean those who can’t be bothered to learn about rebase. But what you really lose there is a nice commit history. That’s good to have even if your primary strategy is merging, and people tend to create horrendous commit histories when they don’t know how to edit them.
Because their name is Elon Musk.
Leap years are not as bad as timezones, if you think about it. Timezones try to imperfectly solve a local problem - how to match your clock with the position of the sun. Leap years try to reasonably solve a global problem - how to keep your calendar aligned with the seasons.
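And the fix really is reasonable - the entire Gregorian leap rule fits in one line (sketched here in Python):

    def is_leap(year: int) -> bool:
        # Every 4th year, except centuries, except every 400th year:
        # is_leap(2024) -> True, is_leap(1900) -> False, is_leap(2000) -> True
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)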
You don’t have any evidence, much less anecdotal evidence. Things don’t become true just because you insist.
Solid state physics.