Temu—the Chinese shopping app that has rapidly grown so popular in the US that even Amazon is reportedly trying to copy it—is “dangerous malware” that’s secretly monetizing a broad swath of unauthorized user data, Arkansas Attorney General Tim Griffin alleged in a lawsuit filed Tuesday.

Griffin cited research and media reports exposing Temu’s allegedly nefarious design, which “purposely” allows Temu to “gain unrestricted access to a user’s phone operating system, including, but not limited to, a user’s camera, specific location, contacts, text messages, documents, and other applications.”

“Temu is designed to make this expansive access undetected, even by sophisticated users,” Griffin’s complaint said. “Once installed, Temu can recompile itself and change properties, including overriding the data privacy settings users believe they have in place.”

  • Thevenin@beehaw.org · 17 points · 4 months ago

    I’d believe it because I remember the same being true for TikTok.

    I don’t have the links on me right now, but I remember clearly that when TikTok was new, engineers trying to figure out what data it collected found that the app could recognize when it was being observed and would “rewrite” itself to evade detection.

    They noted that they’d never seen this outside of sophisticated malware, and doubted that a social media company had the resources to write such a program.

    • jarfil@beehaw.org · 17 points · 4 months ago

      doubted that a social media company had the resources to write such a program.

      Em… writing a different manifest and asking the OS to reinstall the app is not rocket science. Detecting that it’s running in a testing environment and not asking for permission to access some types of data is also quite easy (a naive version is sketched below). Downloading different updates or modules depending on which device and environment it gets installed on is basic functionality.

      It’s still sneaky behavior and a dark pattern, but come on.
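
      To illustrate what “quite easy” means here: the usual copy+paste approach is just a handful of string checks against android.os.Build fields. A naive Kotlin sketch, illustrative only and not pulled from any real app:

      ```kotlin
      import android.os.Build

      // Naive emulator/test-environment check of the copy+paste variety.
      // Real sandboxes defeat checks this simple, but it's the common starting point.
      fun looksLikeEmulator(): Boolean =
          Build.FINGERPRINT.startsWith("generic") ||
              Build.FINGERPRINT.contains("emulator") ||
              Build.MODEL.contains("Emulator") ||
              Build.HARDWARE.contains("goldfish") ||
              Build.HARDWARE.contains("ranchu") ||
              Build.PRODUCT.contains("sdk")

      // An app could then simply behave itself while under observation.
      fun shouldCollectExtraTelemetry(): Boolean = !looksLikeEmulator()
      ```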

      • t3rmit3@beehaw.org · 18 points · 4 months ago

        Uh, as someone who does malware analysis: sandbox detection is not easy, and it’s certainly not something that a non-malware-developer/analyst knows how to do. This isn’t 2005, when sandboxes listed their names in the registry or system config files.

        • jarfil@beehaw.org · 7 points · 4 months ago

          I haven’t done sandbox detection for some years now, but around 2020 it was already “difficult” in the sense of being hard to write from scratch… yet skid-easy in the sense of copy+pasting from something that already does it. Surely newer sandboxes take more things into account, but at the same time more detection examples get published, which simply advances the starting point.

          So maybe TikTok has a few people focused on it, possibly with some CI tests for several sandboxes. I don’t think it’s particularly hard to do 🤷

      • Thevenin@beehaw.org · 8 points · 4 months ago

        I found at least one of the posts, and you’re right, that’s not really what impressed them. It just stuck with me because I’m a hardware girl.

        • jarfil@beehaw.org · 12 points · 4 months ago (edited)

          There is some irony to be had in discussing this stuff on a page that starts by asking me to log in, then to be good and disable my ad blocker, only to then keep half the text of the article as images so you can’t copy+paste it… and even all the comments!

          Anyhow…

          https://www.boredpanda.com/tik-tok-reverse-engineered-data-information-collecting/?utm_source=twitter&utm_medium=social&utm_campaign=organic

          😈 Thanks for telling us where you got the link from, I didn’t really care. 😁

          Static backup (possibly): https://archive.is/UD2SA

          *Phone hardware (CPU type, number of cores, hardware IDs, screen dimensions, DPI, memory usage, disk space, etc.)

          Check out: https://amiunique.org/fingerprint

          No app needed!

          Using that as a baseline… the CPU type, memory usage, disk space, etc. are some extra data points freely available to all apps.

          A developer can distribute an app with multiple versions, some targeting more modern and capable devices, some older and more limited. It’s a feature, not a bug!
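
          To make the “no app needed” point concrete, here is roughly the kind of data an Android app can read without a single permission prompt. A sketch; the data class and field names are mine, not any app’s actual payload:

          ```kotlin
          import android.content.res.Resources
          import android.os.Build
          import android.os.Environment
          import android.os.StatFs

          // Data points available to every app with zero permission prompts.
          data class DeviceFingerprint(
              val manufacturer: String,
              val model: String,
              val cpuAbis: List<String>,
              val cores: Int,
              val screenWidthPx: Int,
              val screenHeightPx: Int,
              val densityDpi: Int,
              val freeDiskBytes: Long,
          )

          fun collectFingerprint(): DeviceFingerprint {
              val metrics = Resources.getSystem().displayMetrics
              val dataDir = StatFs(Environment.getDataDirectory().path)
              return DeviceFingerprint(
                  manufacturer = Build.MANUFACTURER,
                  model = Build.MODEL,
                  cpuAbis = Build.SUPPORTED_ABIS.toList(),
                  cores = Runtime.getRuntime().availableProcessors(),
                  screenWidthPx = metrics.widthPixels,
                  screenHeightPx = metrics.heightPixels,
                  densityDpi = metrics.densityDpi,
                  freeDiskBytes = dataDir.availableBytes,
              )
          }
          ```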

          *Other apps you have installed (I’ve even seen some I’ve deleted show up in their analytics payload - maybe used as a cached value?)

          This is overreaching for an app that has nothing to do with managing other apps. Still, you may want some app with those capabilities… so let’s call it “sus”.
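
          For reference, enumerating other installed packages is close to a one-liner, which is exactly why broad use of it reads as sus. A sketch (on Android 11+ this additionally requires QUERY_ALL_PACKAGES or explicit <queries> declarations):

          ```kotlin
          import android.content.Context
          import android.content.pm.PackageManager

          // List the package names of everything installed on the device.
          fun installedPackageNames(context: Context): List<String> =
              context.packageManager
                  .getInstalledApplications(PackageManager.GET_META_DATA)
                  .map { it.packageName }
          ```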

          *Everything network-related (ip, local ip, router mac, your mac, wifi access point name)

          Your IP is… well, you’re using it to connect, they will see it, duh.

          The rest is overreaching and gets into personal-information-violation territory, but it can be used for geolocation… the OS does the same thing; that’s the data it uses to fine-tune the GPS location.
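
          For what it’s worth, the Wi-Fi data points all come from one API, and the OS already treats them as location data. A rough sketch, not what any particular app does:

          ```kotlin
          import android.content.Context
          import android.net.wifi.WifiManager

          // On modern Android, SSID/BSSID are only returned if the app also holds a
          // location permission, and the MAC address comes back as 02:00:00:00:00:00.
          fun wifiDetails(context: Context): Triple<String?, String?, String?> {
              val wifi = context.applicationContext
                  .getSystemService(Context.WIFI_SERVICE) as WifiManager
              val info = wifi.connectionInfo // deprecated in API 31, still widely used
              return Triple(info.ssid, info.bssid, info.macAddress)
          }
          ```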

          *Whether or not you’re rooted/jailbroken

          Typical feature for banking and DRM-protected apps. Nothing to see here.
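
          The classic naive root check is just looking for an su binary in well-known places, roughly like this (illustrative; real banking apps lean on server-backed checks such as Play Integrity):

          ```kotlin
          import java.io.File

          // Does an `su` binary exist in any of the usual locations?
          fun looksRooted(): Boolean =
              listOf(
                  "/system/bin/su",
                  "/system/xbin/su",
                  "/sbin/su",
                  "/su/bin/su",
              ).any { File(it).exists() }
          ```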

          *Some variants of the app had GPS pinging enabled at the time, roughly once every 30 seconds - this is enabled by default if you ever location-tag a post IIRC

          Best answered by a comment [1] (SEE BELOW).

          TL;DR: more DRM stuff.

          *They set up a local proxy server on your device for “transcoding media”, but that can be abused very easily as it has zero authentication

          This is somewhat sus, but a local proxy by itself doesn’t mean any sort of risk, or that it could be exploited.

          For example, Tor can be accessed using a local proxy (although VPN mode is safer).
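
          The practical difference between a harmless local proxy and an exposed one is mostly the bind address. A minimal sketch (ports are arbitrary):

          ```kotlin
          import java.net.InetAddress
          import java.net.ServerSocket

          // Only reachable from processes on the device itself (other apps included,
          // which is where "zero authentication" can still matter).
          fun loopbackOnlyServer(): ServerSocket =
              ServerSocket(8123, 50, InetAddress.getLoopbackAddress())

          // Binds every interface: anything on the same Wi-Fi network can talk to it.
          fun openToTheLan(): ServerSocket =
              ServerSocket(8124)
          ```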

          The scariest part of all of this is that much of the logging they’re doing is remotely configurable,

          Not exactly. That’s how feature flags and remote testing/debugging work, too.
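
          A remotely configurable flag can be as mundane as this sketch (the URL, flag name, and blocking fetch are all made up for illustration; a real app would fetch asynchronously and cache):

          ```kotlin
          import org.json.JSONObject
          import java.net.URL

          object RemoteFlags {
              private var flags = JSONObject()

              // Pull the latest config from the server.
              fun refresh() {
                  flags = JSONObject(URL("https://example.com/config.json").readText())
              }

              // "Remotely configurable logging" boils down to reading a flag like this.
              fun verboseLoggingEnabled(): Boolean =
                  flags.optBoolean("verbose_logging", false)
          }
          ```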

          and unless you reverse every single one of their native libraries (have fun reading all of that assembly, assuming you can get past their customized fork of OLLVM!!!) and manually inspect every single obfuscated function.

          This is worse (why do they use a custom OLLVM fork?), and obfuscation usually means they have something to hide. It’s the opposite of security for the user.

          They have several different protections in place to prevent you from reversing or debugging the app as well. App behavior changes slightly if they know you’re trying to figure out what they’re doing.

          Not good, but unfortunately allowed. That behavior is shared by both DRM protected software, and malware.
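
          The simplest form of “behavior changes when someone is watching” is a debugger check; the native-level tricks (ptrace, reading TracerPid) follow the same idea. A hypothetical sketch:

          ```kotlin
          import android.os.Debug

          // Is a debugger attached, or is the app waiting for one?
          fun underObservation(): Boolean =
              Debug.isDebuggerConnected() || Debug.waitingForDebugger()

          fun maybeLogEvent(event: String) {
              if (!underObservation()) {
                  // send the analytics event; an analyst stepping through sees nothing happen
              }
          }
          ```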

          There’s also a few snippets of code on the Android version that allows for the downloading of a remote zip file, unzipping it, and executing said binary. There is zero reason a mobile app would need this functionality legitimately.

          False.
          There are two legitimate reasons: plugins, and DLCs.

          It can be used for shady stuff, but is also a “feature, not a bug”.
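
          For the record, the “download code and run it” pattern is the same API a legitimate plugin system would use. A sketch with made-up paths and class names:

          ```kotlin
          import android.content.Context
          import dalvik.system.DexClassLoader
          import java.io.File

          // Load a class from a downloaded .apk/.dex/.zip. Plugins and DLC do this
          // legitimately; malware uses the exact same mechanism.
          fun loadPlugin(context: Context, downloadedApk: File): Class<*> {
              val loader = DexClassLoader(
                  downloadedApk.absolutePath,        // downloaded archive containing dex code
                  context.codeCacheDir.absolutePath, // optimized output dir (ignored on API 26+)
                  null,                              // no extra native library path
                  context.classLoader                // parent class loader
              )
              return loader.loadClass("com.example.plugin.EntryPoint")
          }
          ```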

          On top of all of the above, they weren’t even using HTTPS for the longest time. They leaked users’ email addresses in their HTTP REST API, as well as their secondary emails used for password resets. Don’t forget about users’ real names and birthdays, too. It was alllll publicly viewable a few months ago if you MITM’d the application.

          Well, that’s just stupid, there is zero reason to send data unencrypted.

          They encrypt all of the analytics requests with an algorithm that changes with every update (at the very least the keys change) just so you can’t see what they’re doing.

          Ehm… this is the correct behavior. See previous point.

          They also made it so you cannot use the app at all if you block communication to their analytics host at the DNS level.

          Sus… but see the introductory part of this comment. Should boredpanda also be banned?

          TikTok put a lot of effort into preventing people like me from figuring out how their app works. There’s a ton of obfuscation involved at all levels of the application, from your standard Android variable renaming grossness to them (Bytedance) forking and customizing OLLVM for their native stuff. They hide functions, prevent debuggers from attaching, and employ quite a few sneaky tricks to make things difficult. Honestly, it’s more complicated and annoying than most games I’ve targeted.

          This is bad, and a reason to use FLOSS apps… but since it’s been accepted behavior for proprietary software, along with DRM… don’t blame the player, blame the game.

          No, seriously, blame the DMCA and friends. There is no way to simultaneously “enforce DRM, keep a copy of all keys at a trusted third party, and keep users secure”… so the current situation is “you get none of those”.


          [1]

          sr71Girthbird 39 points 1 day ago

          Not OP but I work at a company providing video infrastructure, and one of our products is an analytics suite. It provides all the data he mentioned and a ton more. Turner, Discovery, New York Times, Hulu, and everyone’s favorite company, MindGeek, all use our analytics, among hundreds of other large customers. Specifically, where this guy says “Some variants of the app had GPS pinging enabled at the time, roughly once every 30 seconds”: that’s called a heartbeat. The app or video player within the app has to have a heartbeat so that the player can detect if a viewer is still watching video, etc. Our analytics + video player services send a regular heartbeat every 8 seconds. It definitely pulls in your exact location.
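
          For a rough idea of what such a heartbeat amounts to in app code (interval and payload are illustrative, not this company’s actual implementation):

          ```kotlin
          import kotlinx.coroutines.CoroutineScope
          import kotlinx.coroutines.Dispatchers
          import kotlinx.coroutines.Job
          import kotlinx.coroutines.delay
          import kotlinx.coroutines.launch

          // Periodically tell the analytics backend "the viewer is still here".
          fun startHeartbeat(scope: CoroutineScope, sendPing: suspend () -> Unit): Job =
              scope.launch(Dispatchers.IO) {
                  while (true) {
                      sendPing()   // e.g. POST playback position (and, apparently, location)
                      delay(8_000) // an 8-second interval, per the comment above
                  }
              }
          ```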