• 0 Posts
  • 32 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • PixxlMan@lemmy.world to Programmer Humor@lemmy.ml · The future is now · ↑ 193 / ↓ 4 · 1 year ago

    Translation of developer utilities themselves is the final layer of hell. I’m not hearing anybody out about this kind of stuff - after Microsoft decided to TRANSLATE THE EXCEPTION MESSAGES IN .NET WITH NO WAY TO BYPASS IT, making them unclear, unusable and ungoogleable, I realized what a terrible idea it is to fragment developer knowledge by language.

    Let’s just stick to a lingua franca, please.



  • Emphasized “continue” or “default” buttons have been around for a long time. In a software installer, nonstandard options are often less emphasized than the standard ones. For instance, when choosing an installation location, it makes sense to emphasize the default option, which is fine for most users. If the “continue” and “change location” buttons were equally prominent, the user might believe a choice has to be made there, or that they are expected to pick a location themselves. When emphasis is used well, the installation experience is more streamlined, less confusing for less technically proficient users, and demands less cognitive load.

    As I said in an earlier comment, something being a dark pattern is entirely a matter of context. If used to encourage the user to shell out for gems in a mobile game, it’s a dark pattern. If used to make the user experience better, it’s just good UX.


  • I largely agree with you. It isn’t always a dark pattern. It is a dark pattern if it’s used shadily or maliciously, for example to trick you into downloading adware in an installer. It’s not a dark pattern, but rather good UX design, when it’s used to indicate a likely default choice, for instance:

    We’ve detected your system is set to Dutch. Is Dutch your preferred language?

    [No, let me change] [Looks good]

    Maybe someone else has other examples of good uses. It’s not appropriate everywhere.


  • To everyone commenting that you have to convert to binary to represent numbers because computers can’t deal with decimal number representations: this isn’t true! Floating point arithmetic could totally have been implemented with decimal numbers instead of binary. Computers have no problem with decimal numbers - integers exist. Binary-based floating point numbers are perhaps a bit simpler, but they’re not a necessity. It just happens that the common floating point standards use binary.
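
    For the curious, decimal floating point is a real thing: IEEE 754-2008 defines decimal formats, and Python’s standard library ships a software implementation of decimal arithmetic. A minimal sketch of the difference, using Python’s decimal module purely as an illustration:

    ```python
    # Binary floating point: 0.1 and 0.2 have no exact base-2 representation,
    # so the familiar rounding error appears.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False

    # Decimal floating point (software implementation from the standard library):
    # the same values are exact in base 10, so the arithmetic behaves the way
    # pencil-and-paper decimal math would.
    from decimal import Decimal

    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
    ```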