Early morning musings (warning: heavy outside stuff)
-
Go watch a whole lotta racing first.
Done? Done. Now for the main:
The thorniest subject is often the one that seems the most sensible and "right". Case in point: this move by Apple.
I've actually read a few more comments and explainers in other forums about how exactly Apple will implement that system, but it's a "give them an inch" situation to me. Still, as I'm typing this now, there's this nagging thought at the back of my head that says "oh, but if you say so-and-so, that must mean you're a pee-doe!!! You're a good person, right? And you have nothing to hide."
Thing is, I don't. But lying is easier than baring yourself naked. Beyond the mistakes and the potential for sabotage (trillion-to-one is still greater than zero), solving child abuse is at once easy to do (be a better parent) and so case-by-case that a general solution will trigger enough false positives to create a chilling effect on ordinary people. It punishes regular folks who abide by the law and treat others decently, just to catch a few monsters in our midst. It breaches someone's right to be left alone and not be barged in on for no reason.
-
@wheelerguy Part of me wonders how many requests Apple was fielding from local LEO and the FBI for access to a suspected pedophile's iPhone. Maybe they just said screw it, if we make our system unfriendly to those types then it'll push them over to Android or normal PCs, where it'll be easier for the LEO to secure a warrant. That way it gets the government off their back and they don't have to give up their ethos of customer privacy. At least privacy against outsiders, as Apple likes to keep all their crap in house and is pretty damn good at it.
I will add that I don't have kids, so I can't bring that perspective into the conversation. I do have friends with kids in their early teens. The shit they think is "ok" to send through messaging/social apps blows my mind. Of course most of them aren't sending those kinds of things through the native Messages app. However, it is easier for parents to control what apps their kids have installed on their iPhone.
Do I like the thought of a company snooping through my personal photos? Not really. But I already give out so much of my personal information to companies without even blinking an eye. All for the sake of convenience. Now if Apple was a quasi-public corporation... yea I would have much more of a problem with that.
I can see where some can take the slippery slope stance on it. Does that make them a pedo? No, they just like privacy, or at least the illusion of it, and that's ok. I personally am not much for slippery slope arguments, as I tend to think there are actually multiple humans involved in a process and those humans have varying lines in the sand. Those lines generally shift with the prevailing winds.
-
This is just a trial run for the future. Will Apple implement something similar in China, where they scan for photos or memes related to protests, calls for democracy or the Uyghurs? Will Apple download hashes for anti-Putin photos to phones of Russian users?
How does this not violate the 4th Amendment in the U.S.? I guess it's part of the terms of use you agree to when you accept Apple's license agreement. Apparently you don't really own your iPhone; Apple owns it and you lease it under their terms and conditions.
One question is whether "right to repair" laws will allow you to jailbreak your iPhone without Apple being able to void your warranty.
-
@roadkilled The 4th Amendment doesn't apply between companies and citizens, though. As long as it's in their TOU, they're in the clear from other privacy suits.
I mean, you can't download a lot of things via torrents without the risk of a cease and desist letter from a rights holder or your service provider, and a risk of further exposure to a lawsuit if you continue. I know I've certainly gotten a few warnings. A private company can monitor where content goes and enforce rules or laws upon you. The 4th Amendment is no help there.
Now, per Apple's Technical Summary:
"CSAM Detection enables Apple to accurately identify and report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts."
So if I am reading this summary correctly (and it's very possible I'm not), Apple assigns a hash to an image on your phone. That same algorithm has assigned hashes to images in a known Child Sexual Abuse Material database. Through a bunch of nerd stuff, it tries to link the hash of your image to one in that database at the moment you attempt to upload to iCloud or send via the Messages app. If there is no match, then nothing happens: no manual review, and it's just another bit of metadata assigned to the image. So basically that specific content has to already exist in a database. It looks like a tool to suppress proliferation of existing content, and it does nothing to prevent novel cases.
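For anyone who wants the "nerd stuff" made concrete, here's a toy sketch of hash-based matching. To be clear, this is not Apple's NeuralHash or its cryptographic protocol; the "average hash" below, the match distance, the review threshold number, and all the function names are made up for illustration. The point is just the shape of the mechanism: an image gets boiled down to a fingerprint, compared against a set of known fingerprints, and nothing gets surfaced unless enough matches pile up.

```python
# Toy illustration of hash-based matching against a known-content database.
# NOT Apple's NeuralHash or its private matching protocol -- just a simple
# perceptual "average hash" to show the general idea.

from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to size x size grayscale and set one bit per pixel
    depending on whether it is brighter than the mean. Small edits (minor
    crops, recompression) tend to leave most of the bits unchanged."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits where two fingerprints differ."""
    return bin(a ^ b).count("1")

def matches_database(path: str, known_hashes: set[int], max_distance: int = 5) -> bool:
    """Near-match lookup: a handful of differing bits still counts as a hit.
    If nothing matches, nothing happens -- the hash is just metadata.
    (Apple's real system layers cryptography on top of this idea; none of
    that is shown here.)"""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= max_distance for k in known_hashes)

# Hypothetical threshold before anything reaches a human reviewer,
# mirroring the idea that a single hit is not enough to surface an account.
REVIEW_THRESHOLD = 30

def should_flag_account(photo_paths: list[str], known_hashes: set[int]) -> bool:
    hits = sum(1 for p in photo_paths if matches_database(p, known_hashes))
    return hits >= REVIEW_THRESHOLD
```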
Now, can it be used for other purposes in the future? Perhaps it could be adapted. It seems to work on specific images, though. It doesn't say what percentage of an image has to match; apparently a cropped version of an existing photo in the database can still produce the same hash, so it can still be identified. Could it take a phrase like "fuck Putin" spray-painted on a sign in a database and link it to protesters writing the same thing on their own signs, in different locations, photographed on their phones? I don't know. Memes certainly would be easier.
I can understand the concern, but I don't get the outrage some people are having.
-
@thebarber My 4th Amendment question is based on whether Apple notifies authorities based on what happens on your phone. If Apple is doing this entirely of their own volition, it's a private company issue based on their user agreement. If Apple is doing this because of police agencies asking them to, it might be a different issue. It's unclear to me why Apple started this process. Is there some government request behind it?
I'm just wondering how this will be used in the future. I hate "slippery slope" arguments, and I realize that this is exactly what I'm doing here.
I think that the big tech companies have amassed an enormous amount of power and control. Maybe it's time for a privacy law stating what limits tech companies have. That would at least delineate what is permissible and what isn't. Europe is moving in that direction, although I think that their laws could use some adjustments.
-
@wheelerguy Apple will never find my quick bits
-
@wheelerguy This is kinda scary. So no more pictures of the fam at the beach, or swimming pool. Baby pics will be tough.
-
I feel the lede is largely buried in a lot of this coverage and what has been released about the technologies involved so far. There's plenty of time spent on the photo hash matching topic, which isn't entirely awful in the confines of a specific cloud storage solution, but not nearly as much is being said about the other more potentially troublesome parts:
In addition to scanning devices for images that match the CSAM database...
Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit.
"on-device machine learning" ... For starters, saying "on-device" is a good way to pretend like something is "isolated" when it actually isn't. Machine learning does not spring fully formed from a single device, nor does it without human influence. Training data and other guidelines must be provided to get that ball rolling.
This is a topic still rife with issues. Tech companies have had numerous problems avoiding bias and other unintended outcomes creeping in through various avenues when handing the reins off to "AI" or other machine learning algorithms. This is a pretty good explainer regardless of your technical skill level:
One example that immediately jumps to mind for me here is LGBTQ+ content. Its inherent adjacency to the topic of sex has it often running afoul of these sorts of detection algorithms. In this particular application, I can't help but wonder about the risks of "outing" a teen or shutting down venues of outside communication on that topic in a home situation that might not be safe for it.
AI and machine learning also make for a very convenient scapegoat if/when something goes wrong. Past a certain point, there's plenty of potential for it to become a black box too complex (read: expensive and/or buried under IP protections) to comprehensively audit how or why it acted in a certain way. In other words, good luck holding anybody accountable.
Speaking of...
The feature is designed so that Apple does not get access to the messages
"We won't read your messages directly, so don't mind us over here developing this tech by which an algorithm that we don't directly control informs other parties that we don't control about what you're doing on your device."
And finally, there is of course the mountain of other concerns when it comes to circumventing encryption and opening up another venue for abuse by malicious entities, overly-aggressive corporations, or governments with less than stellar human rights track records. The EFF article mentioned in the video lays out the explanations a lot better than I can.
This sort of erosion of privacy is nothing new or exclusive to Apple, but it never gets any less frustrating to encounter, or less worthy of being called out, no matter how honorable the pretenses behind it may initially be.