Just checked, and unfortunately no, Wayland is still in preview.
I think Flutter and Avalonia both tick all those boxes.
I spend a lot of time splitting them up inside Visual Studio by file and by individual lines changed, to try and separate my many simultaneous changes into several somewhat usable commits. If I was stupid enough to make some big refactor at the same time I might just have to throw in the towel… It’s really painful after a few weeks to try and pick up the pieces of what I was doing but never committed, lol.
My butterfly was having a bad day, so I can’t be sure, sorry
No, please tell the user. They’ve got their big boy pants on and can handle seeing one or two weird squiggles in the worst case, and might be able to actually diagnose and fix the issue themselves (without having to go through support) in the best case.
The last panel is infinitely more readable than parsing the whole chunk of logic above. Maybe you’re just not used to this language’s (I think this meme used C#) null operators.
NOOOOOOO NOT THE FUCK W*RD!
Precision always degrades
It’s just… Why?
Was there a thought process applied here at all? Worse still, many of these localised paths are actually lies. They still use the original developer-facing names underneath in order not to break compatibility with programs, but refuse to admit it in Explorer. It’s maddening.
My gripe: I hate when people make stupid Lemmy comments.
Look I made a funny!
(The point is to show that not all gripes are automatically jokes…)
Translation of developer utilities themselves is the final layer of hell. I’m not hearing anybody out about this kind of stuff - after Microsoft decided to TRANSLATE THE EXCEPTION MESSAGES IN .NET WITH NO WAY TO BYPASS IT, making them unclear, unusable, and ungoogleable, I realized what a terrible idea it is to fragment developer knowledge by language.
Let’s just stick to a lingua franca, please.
The docs for C# are stellar imo
Emphasised “continue” or “default” buttons have been around for a long time. In a software installer, nonstandard options are often less emphasised than the standard ones. For instance, when choosing an installation location, it makes sense for the default option, which is fine for most users, to be emphasised. If the continue and change-location buttons were equally prominent, the user might believe that a choice must be made here, or that they are expected to choose a location. When emphasis is used well, the experience of installing is more streamlined, less confusing for the less technically proficient, and requires less cognitive load.
As I said in an earlier comment, something being a dark pattern is entirely a matter of context. If used to encourage the user to shell out for gems in a mobile game, it’s a dark pattern. If used to make user experience better, it’s just good UX.
I largely agree with you. It isn’t always a dark pattern. It is one if it’s used shadily or maliciously, for example to trick you into downloading adware in an installer. It’s good UX design, not a dark pattern, if it’s used to indicate a likely default choice, for instance:
We’ve detected your system is set to Dutch. Is Dutch your preferred language?
[No, let me change] [Looks good]
Maybe someone else has other examples of good uses. It’s not appropriate everywhere.
You need something to download Firefox from, right?
Edge isn’t terrible at all. That’s why it’s such a risk to browser diversity and competition.
That unlike Teams, which they didn’t bother to build natively for Windows and instead shipped as a webapp, they actually bothered to use their own UI tools on their own operating system for a change? (But I guess they only did that so that Teams could be a webapp, based on Edge…)
Wrong. It sounds like you think only fixed point/fixed precision could be implemented in decimal. There’s nothing about floating point that would make it impossible to implement in decimal. In fact, it’s a common form of floating point. See the C# `decimal` type docs.
The beginning of the Wikipedia article on floating point also says this: “In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common.” (https://en.m.wikipedia.org/wiki/Floating-point_arithmetic) Also check this out: https://en.m.wikipedia.org/wiki/Decimal_floating_point
Everything in my comment applies to floating point. Not fixed point.
To everyone commenting that you have to convert to binary to represent numbers because computers can’t deal with decimal number representations: this isn’t true! Floating point arithmetic could totally have been implemented with decimal numbers instead of binary. Computers have no problem with decimal numbers - integers exist. Binary-based floating point numbers are perhaps a bit simpler, but they’re not a necessity. It just happens that the dominant floating point formats use binary.
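A quick illustration of the point, in Python rather than the C# mentioned elsewhere in the thread, since Python’s standard library ships a decimal floating point type out of the box:

```python
# Decimal floating point is real and works today: Python's decimal
# module implements floating point arithmetic in base ten.
from decimal import Decimal, getcontext

# The classic binary floating-point surprise:
print(0.1 + 0.2 == 0.3)  # False: 0.1 and 0.2 have no exact base-2 form

# In base ten, those same fractions are represented exactly:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# It is still *floating* point, not fixed point: precision is finite
# and configurable, and non-representable values get rounded.
getcontext().prec = 4
print(Decimal(1) / Decimal(3))  # 0.3333
```

Decimal still rounds (1/3 has no finite representation in any integer base), but it rounds in the base humans write numbers in, which is exactly the trade-off the comment above is describing.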
Nah programming is awesome 😎
The guy only looks unhappy on the outside; inside, he’s pleased to be programming lol