a) why the fuck would they go to that effort for a filthy commoner like yourself, and b) what are the chances that 0.01% of recoverable data contains anything useful!?!
Nobody is gonna bother doing advanced forensics on 2nd hand storage, digging into megabytes of reallocated sectors on the off chance they find something financially exploitable. That’s a level of paranoia no data supports.
My example applies to storage devices that don’t default to encryption (most non-OS external storage). It’s analogous to changing your existing encrypted disk’s password to a random-ass unrecoverable throwaway.
I was thinking about this and imagined the federated servers handling the index db, search algorithms, and search requests, while leveraging each user’s browser/compute to do the actual web crawling/scraping/indexing; the server would simply perform CRUD operations to write the processed data from clients into the index db. This approach would target the core reason search engines are so expensive to run (the cost of scraping and processing billions of sites), reduce the cost of hosting a search server, and spread the expense across the user base.
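Something like this is what I had in mind for the client side. It's only a rough sketch, and the endpoints (`/task`, `/submit`) and field names are made up for illustration, not any existing API:

```typescript
// Hypothetical client-side crawl worker: the federated server hands out a URL,
// the browser fetches and crudely tokenizes it, then posts the processed term
// counts back so the server only has to upsert rows into its index db.

interface CrawlTask {
  taskId: string;
  url: string;
}

async function runCrawlTask(serverBase: string): Promise<void> {
  // Ask the lighthouse/server which page this client should crawl next.
  const task: CrawlTask = await (await fetch(`${serverBase}/task`)).json();

  // Fetch and tokenize the page using the user's own compute.
  const html = await (await fetch(task.url)).text();
  const text = html.replace(/<[^>]+>/g, " ");
  const termCounts = new Map<string, number>();
  for (const word of text.toLowerCase().split(/\W+/)) {
    if (word.length > 2) termCounts.set(word, (termCounts.get(word) ?? 0) + 1);
  }

  // Ship only the processed term vector back; the heavy scraping/parsing work
  // never touches the server.
  await fetch(`${serverBase}/submit`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      taskId: task.taskId,
      url: task.url,
      terms: Object.fromEntries(termCounts),
    }),
  });
}
```

In practice, browser same-origin/CORS restrictions would probably push this into a browser extension or a small bundled helper rather than plain page JavaScript, but the division of labour stays the same.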
It may also have the added benefit of hindering surveillance capitalism, thanks to a sea of junk queries coming from every client, especially if the crawler requests were made from the same browser (obviously isolated from the user’s own data, extensions, queries, etc.). The federated servers would also probably need to operate as lighthouses that orchestrate which domains and IP ranges to crawl and efficiently distribute that workload across client machines.
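The lighthouse part could be as dumb as handing out the shard that has waited longest. Again, just a sketch with invented names, assuming the crawl frontier is pre-split by domain:

```typescript
// Hypothetical lighthouse-side orchestration: shard the crawl frontier by
// domain and hand the least-recently-assigned shard to whichever client asks,
// so the load spreads evenly instead of hammering the same IP ranges.

interface Shard {
  domains: string[];
  lastAssigned: number; // epoch ms of the last time a client got this shard
}

const shards: Shard[] = [
  { domains: ["example.org", "example.net"], lastAssigned: 0 },
  { domains: ["example.com"], lastAssigned: 0 },
];

function assignShard(): Shard {
  // Pick the shard that has gone longest without being crawled.
  const next = shards.reduce((a, b) => (a.lastAssigned <= b.lastAssigned ? a : b));
  next.lastAssigned = Date.now();
  return next;
}
```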