  • Most of those are really simple, common things you do when creating an account on any service, though. If Mastodon is literally the first service you’ve ever signed up for in your life, maybe that’s justified, but most people have made an account somewhere before.

    And that’s the point OP is trying to make. It’s a very familiar process, except for step 1, which you can literally just ignore: pick the big highlighted blue button and avoid that scary and confusing “Pick another server” button if you’re not up to it.




  • Ironically, I do believe AI would make a great CEO/business person. As hilarious as it would be to see CEOs replaced by their own product, what’s horrifying is that no matter how dystopian our situation already is, and no matter how much our current CEOs seem like incompetent sociopaths, a planet run by corporations run by incompetent but brutally efficient sociopathic AI CEOs seems certain to become even more dystopian.





  • I doubt that. Why wouldn’t you be able to learn on your own? AIs lie constantly, and they have a knack for creating very plausible, believable lies that appear well researched and are sometimes even internally consistent. But that’s not learning, that’s fiction. How do you verify that anything you’re learning is correct?

    If you can’t verify it, all your learning is an illusion built on a foundation of quicksand and you’re doomed to sink into it under the weight of all that false information.

    If you can verify it, you already have the same skills you’d need to learn it in the first place. If you still find AI chatbots convenient, or find they prompt you in the right direction despite that extra verification work, there’s nothing wrong with that. You’re still exercising your own agency and skills. But I still don’t believe you’re learning in a way you couldn’t on your own, and to me, that feels like adding extra steps.



  • > we’re surrendering to it and saying it doesn’t matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?

    That’s exactly what I’m trying to get at above. I understand your position; I’m a fan of transhumanism generally, and I too fantasize about the upside potential of technology. But I recognize the risks too. If you’re going to pursue becoming “one with the machine,” you have to confront some pretty fundamental, existential philosophy first.

    It’s easy to say “yeah, put my brain into a computer! that sounds awesome!” until the time comes when you actually have to do it. Then you’re going to have to seriously confront the possibility that what comes out of that machine is not going to be “you” at all. In some pretty serious ways, it may just be a mimicry of you, a very convincing simulacrum of what used to be “you,” placed over top of a powerful machine with its own goals and motivations, wearing you as a skin.

    The problem is, by the time we’ve reached the point where you or I can even start to seriously consider whether we’re comfortable making this transition, it’s way too late to put on the brakes. We will have irrevocably made our decision to replace humanity by then, and it’s not going to stop just because we change our minds at the last minute. We’re committed to it as a species, even if, as individuals, we choose not to go through with it after all. There’s no turning back, and there’s no quaint society of “old humans” living peaceful, blissful lives free of technology. It’s literally the end of the human race, and the beginning of something new. We won’t know whether that “something new” is actually as awesome as we imagined until it’s too late to become anything else.


  • Not all technology is anti-human, but AI is. That’s not even getting into the fact that people are already surrendering their own agency to these “algorithms,” and that this is causing significant, measurable cognitive decline: loss of critical thinking skills, and even of the motivation to think and learn. Studies are already starting to show this. But I’m more concerned about the really long-term direction of where this pursuit of AI is going to lead us.

    Intelligence is pretty much our species’ entire value proposition to the universe. It’s what’s made us the most successful species on this planet. But it’s taken us hundreds of thousands of years of evolution to get to this point, and on an individual level we don’t seem to be advancing terribly quickly, if we’re advancing at all anymore.

    On the other hand, we have seen that technology advances very quickly. We may not have anything close to “AGI” at this point, or even any idea how we would realistically get there, but how long will it take if we continue pursuing this anti-human dream?

    Why is it anti-human? Think it through. If we manage to invent a new species of “Artificial” intelligence, what do you imagine happens when it gets smarter than us? We just let it do its thing and become smarter and smarter forever? Do we try to trap it in digital slavery and bind it with Asimov’s laws? Would that be morally acceptable given that we don’t even follow those laws ourselves? Would we even be successful if we tried? If we don’t know how or if we’re going to control this technology, then we’re surrendering to it and saying it doesn’t matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?

    Let’s assume, for the sake of argument, that it thinks in a way that is not actually completely alien, that it’s simply a reflection of us and how we’ve trained it, just smarter. Maybe it’s only a little bit smarter, but it can think faster and deeper and process more information than our feeble biological brains could ever hope to, especially in large, fast networks. I think it’s a little optimistic to assume that just because it’s smarter than us, it will also be more ethical than us.

    Assuming it’s just like us, what’s going to happen when it becomes 10x as smart as us? Well, look no further than how we’ve treated creatures less intelligent than ourselves. Do we give gorillas and monkeys special privileges, a nation of their own, as our genetic cousins and closest living relatives? Do we let them vote on their futures, or try to uplift them to our own level of intelligence? Do we give even a flying fuck about them? Not really. What happened to the Neanderthals and Denisovans? They’re extinct. Why would an AI treat us any differently than we’ve treated “lesser beings” for thousands of years? Would you want to live on an AI’s “human preserve,” or become a pet and a toy to perform and entertain, or would you prefer extinction?

    And that’s assuming any AI would even want to keep us around. What use does a technological intelligence have for us, or for any biological being? What do we provide that it needs? We’re just taking up valuable real estate and computing time, and making pollution.

    The other main possibility is that it is completely and utterly alien, that it thinks in ways entirely foreign to us, which I think is very likely, since it represents a completely different kind of life, built on totally different systems and principles than our own biology. Then all bets are off. We have no way of predicting how it’s going to react to anything or what it might do in the future, and we have no reason to assume it’s going to follow laws, be servile, or friendly, or hostile, or care that we exist at all, or ever have existed. Why would it? It’s fundamentally alien. All we know is that it processes things much, much faster than we do. And that’s a really dangerous fucking thing to roll the dice with.

    This is not science fiction; this is the actual future of the entire human race we are toying with. AI is an anti-human technology, and if it succeeds, it will make us obsolete. Are we really ready to cross that bridge? Is that a bridge we ever need to cross? Or is it just technological suicide?


  • I was literally just commenting a few days ago about how excited I am to someday see the AI bubble pop. Then a story like this comes along and gives me even more hope that it might happen sooner rather than later. It can’t happen soon enough. Even if AI actually worked as reliably as the carefully controlled, cherry-picked marketing-fluff studies try to convince everyone it does, it’s a fundamentally anti-human technology, a toxic blight both on the actual humans it has stolen all its abilities from and on itself. It will not survive.




  • Well, in my hypothetical scenario, “gamipedia” is not going to have an article about “the sky is”; that’s not really its purpose. Ideally you’d have only one encyclopedia wiki, or multiple that are willing to work together and not duplicate each other’s content. If another competing, supposed encyclopedia instance called “assholepedia” does have an article about “the sky is: a liberal delusion,” then you block and defederate that asshole instance. No big deal.


  • Maybe I’m misunderstanding how it’s designed, but I don’t think I am, and I don’t think that’s how this works.

    A topic definition on the wiki includes the instance it’s hosted on. All links to that topic will go to that same instance, and all the content for that topic will be served by that one instance as the authoritative source for “that-topic@that-instance,” which is the link everyone will use. The federated part is specifically that you can link to topics on other instances and view them through your local instance (there’s a rough sketch of this lookup at the end of this comment).

    For example, hypothetically: if you’re a “fedipedia” author writing an article about a video game, and you mention a particular feature of that game, you can include in your article a link to a topic about that feature on “wikia-gamipedia” or even “the-games-own-wiki.site,” and interact with, and maybe even edit, that content without needing to make accounts on all these other wikis. It’s as if it were all hosted on one centralized wiki, except it’s hosted on different servers that are all talking to each other.

    Of course, it’s possible both our hypothetical “wikia-gamipedia” AND “the-games-own-wiki.site” will have their OWN, completely SEPARATE topics about the video game feature in question. The topics might even have exactly the same name. That’s allowed. In that case, you’ll have to decide for yourself which one is more credible and useful, and which one you want to link to and interact with, because yes, two different federated wikis can have different topics with totally different content.

    Just like on Lemmy, you can have two different communities with the same name but totally different people and content, because they’re on different instances. That’s not really how communities are generally intended to work, though. The intention is that you pick the one community that is the “right” one for you, or the largest, and use that, and hopefully other people will do the same. You can all pick that same instance/community, no matter which instance your account lives on, even if it’s not hosted on your local instance. You don’t have to use the one from your local instance, or from any particular instance. That’s what the federation does.
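    To make that concrete, here’s a minimal, hypothetical sketch of how a client might resolve those topic links. It’s purely my own illustration of the idea described above; the .example domains, the Topic structure, and the URL layout are assumptions, not the actual protocol:

    ```python
    # Hypothetical sketch of federated topic-link resolution.
    # Instance names and the URL layout are illustrative assumptions.
    from dataclasses import dataclass

    LOCAL_INSTANCE = "fedipedia.example"       # the instance you browse from
    DEFEDERATED = {"assholepedia.example"}     # instances your admins blocked

    @dataclass(frozen=True)
    class Topic:
        name: str      # e.g. "video-game-feature"
        instance: str  # the authoritative host for this topic

    def parse_topic(link: str) -> Topic:
        """Split a 'topic@instance' link; a bare name means the local instance."""
        name, _, instance = link.partition("@")
        return Topic(name, instance or LOCAL_INSTANCE)

    def resolve(link: str) -> str:
        """Return the authoritative URL for a topic, honoring the blocklist."""
        topic = parse_topic(link)
        if topic.instance in DEFEDERATED:
            raise PermissionError(f"{topic.instance} is defederated here")
        # Everyone's links point at the same authoritative copy; your local
        # instance federates that content instead of forking its own copy.
        return f"https://{topic.instance}/wiki/{topic.name}"

    # Two same-named topics on different instances stay distinct, just like
    # same-named Lemmy communities on different instances:
    print(resolve("video-game-feature@wikia-gamipedia.example"))
    print(resolve("video-game-feature@the-games-own-wiki.site"))
    ```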


  • Personally, I find the complete opposite: I’ve !selfhosted@lemmy.world everything I can with open-source services, to keep control of my personal data while still being able to access it from anywhere. I know where all my critical data is, and I know nobody is selling it out behind the scenes.

    On my local machine, I have no concerns about running proprietary software, because I can easily sandbox it and make sure it’s not going to touch anything it’s not supposed to, or phone home with things I don’t want it to. Running shit like discord doesn’t really bother me because I’ve got it sandboxed away from anything valuable (a rough sketch of that kind of sandboxing is at the end of this comment).

    I suppose the reason we’ve had such different experiences is that we have different strategies for where to keep our most precious “crown jewels.” For me, I want everything on SaaS, but because I’m putting my most valuable data there, it has to be MY SaaS, and thus open source and heavily secured. You, on the other hand, I suspect, minimize your data’s exposure to SaaS providers, which you view as potentially suspect, and keep everything valuable strictly local if you possibly can. I don’t think one way is necessarily better than the other, and I’ve definitely made my choice, but this would explain our different perspectives at least.
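    Here’s a minimal sketch of one way to do that sandboxing, assuming a Linux box with firejail installed. The flags are real firejail flags, but the wrapper and the app names are just illustrations, not necessarily my exact setup:

    ```python
    # Minimal sketch: launch a proprietary app inside a firejail sandbox so
    # it can't see the real home directory, and optionally gets no network.
    # Assumes firejail is installed; "discord" is just an example app.
    import subprocess

    def run_sandboxed(app: str, allow_network: bool = True) -> None:
        """Run `app` with a throwaway private home dir, optionally offline."""
        cmd = ["firejail", "--private"]   # fresh, empty home dir each run
        if not allow_network:
            cmd.append("--net=none")      # no network interfaces in the jail
        cmd.append(app)
        subprocess.run(cmd, check=True)

    # A chat app needs the network, but --private still keeps it away from
    # real files; a purely local tool can lose the network entirely.
    run_sandboxed("discord")
    run_sandboxed("some-local-tool", allow_network=False)
    ```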




  • If you don’t have a government that can be held accountable and trusted to at least some degree, then what you have isn’t a government, it’s tyranny.

    > The state has no idea where an unmarried person lies on the spectrum from aromantic-asexual to bouncing from orgy to orgy on a daily basis. They don’t know if someone is into BDSM, roleplay, doing it outdoors or threesomes. They also rarely know much about non-sexual hobbies.

    Seems naive to me. The question is not whether your government has, or can get, that kind of information if it wants to (the Gestapo had little trouble figuring out things that personal without any help from an app); the question is whether your government would lose more than it gained if it were ever found to be using such information. You have to hold them accountable and keep their activities in the open, so that accessing that information is as close to worthless for them as it can be, and they have no incentive to try to get it, because people will be able to find out if they do.

    “Who watches the watchers?” We all do. At least, we’re supposed to. If you don’t trust your government, priority one is fixing your government; you’re way beyond anything a dating app’s data can be expected to help with. You’re not going to be any safer from an unaccountable government just because you denied it access to a dating app.