Art.22 ¶1 declares:
The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
without stating who is liable for infringements. Paragraph 3 says:
the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.
That assumes the data controller is aware of and in control of the automated individual decision-making (AIDM). Often data processors implement AIDM without the data controller even knowing. Art.28 ¶1 says:
Where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Regulation and ensure the protection of the rights of the data subject.
Of course what happens in reality is that processors either make no guarantee, or the guarantee is vague with no mention of AIDM. So controllers hire processors blindly. When the controller is some tiny company or agency and the processor is a tech giant like Microsoft or Amazon, it’s a bit rich to put accountability on the controller and not the processor. The DPAs don’t want to sink micro companies over some shit Amazon did of which the controller was not even aware.
As a data subject I have little hope that a complaint of unlawful AIDM will go anywhere. It’s like not even having protection from AIDM at all. The Article 29 Working Party wrote guidelines on automated decision-making in 2017 (WP251), but they make no mention of processors.
When the controller is some tiny company or agency and the processor is a tech giant like Microsoft or Amazon, it’s a bit rich to put accountability on the controller and not the processor.
Can you give more context? Why not simply choose other 3rd parties?
Can you give more context? Why not simply choose other 3rd parties?
I’m not sure what you mean. Do you mean the data subject should choose a different controller, or that the controller should choose a different processor? Both are consumer actions, which anyone in the world can take without a GDPR. But that does not make the GDPR redundant. The GDPR /theoretically/ ensures all market choices are up to a certain standard, so we are not forced into a marketplace of all shit choices.
The insidious problem with AIDM is you often do not even know it’s in play. You don’t necessarily know that an adverse decision to deny you service was due to a robotic algorithm. Denials can do damage, by which point it may be too late to choose not to approach a controller. You don’t have all year to do trial and error with different suppliers.
We also have no choice in some cases because monopolies exist. E.g. there may be only one credit bureau in a consumer’s country, and it may be governmental (like a national bank). If that bank uses Cloudflare for their website, then Cloudflare’s AIDM can deny some consumers web access to their creditworthiness records. The national bank may not even be aware of CF’s use of AIDM. But in any case, you cannot just choose a different supplier, because it’s a monopoly.
Or if an important email to gov agency X is blocked because the agency uses Microsoft and MS uses AIDM, you cannot simply change governments.
Do you mean the data subject should choose a different controller, or that the controller should choose a different processor?
The controller should choose a different processor, is what I meant.
In your examples: it’s the bank and the government agency that are not fulfilling their obligations towards their customers, so they should remedy that.
Yes, but I think you’ve missed the point. Indeed one course of action is to file a GDPR complaint against the small controller to force them to change suppliers. But note that GDPR penalties are capped at the higher of €20 million or 4% of worldwide annual turnover (Art.83 ¶5), and if the controller is a gov agency I don’t even know what determines the penalty. I have also noticed a reluctance of DPAs to act on complaints against other gov agencies.
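For reference, the fine ceiling that complaint route leans on works out like this — a minimal sketch of the Art.83 ¶5 cap (the turnover figures are made up for illustration):

```python
def art83_5_ceiling(annual_turnover_eur: float) -> float:
    """Upper bound on an Art. 83(5) GDPR administrative fine:
    the HIGHER of EUR 20 million or 4 % of total worldwide
    annual turnover of the preceding financial year."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A small controller with EUR 2M turnover still faces the flat EUR 20M ceiling:
print(art83_5_ceiling(2_000_000))        # 20000000.0
# For a tech giant, the 4 % branch dominates:
print(art83_5_ceiling(500_000_000_000))  # 20000000000.0
```

Note the asymmetry: the ceiling scales with the giant’s turnover, yet under the current enforcement pattern it is the small controller who gets the complaint.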
When the processor is a tech giant like Google, Microsoft, or Cloudflare, the AIDM abuse is centralised on them. There are thousands of small businesses and small gov agencies using the services of MACFANG (the various tech giants). It’s a bit misguided to put accountability on each small business who does not even necessarily know the processor they outsourced to uses unlawful AIDM. It would be far more sensible to hit Microsoft or Cloudflare with the liability rather than have a separate Article 77 complaint against each of the small users.
who does not even necessarily know the processor they outsourced to uses unlawful AIDM.
They should! That’s the point! They shouldn’t use bad products, regardless of if it’s home made, from a small 3rd party, or a large 3rd party.
It would be far more sensible to hit Microsoft or Cloudflare with the liability
Why is that? It’s not Cloudflare’s responsibility if a 3rd party (from their perspective) illegally uses their services.
If a restaurant buys nails and puts them in their food, it’s not the nail manufacturer that’s at fault. The argument “but it’s a large nail manufacturer” doesn’t take away one’s own responsibility.
They should! That’s the point! They shouldn’t use bad products, regardless of if it’s home made, from a small 3rd party, or a large 3rd party.
Yes they should, but investigative journalists are not a competent way to have that information disclosed. When the processor secretly uses AIDM and conceals that from the controller, holding the controller EXCLUSIVELY¹ responsible is reckless, because the controller has no right to inspect the servers and code of the processor. It’s a black box. The GDPR requires processors to disclose a lot of GDPR factors in their contract with the controller, but AIDM is not one of them. It is perfectly legal for a processor to (e.g.) write an algorithm that treats black people differently, and not tell the controller. Putting the responsibility on controllers to investigate and discover unlawful practices is not a smart system.
If a restaurant buys nails and puts them in their food, it’s not the nail manufacturer that’s at fault. The argument “but it’s a large nail manufacturer” doesn’t take away one’s own responsibility.
For this analogy to work, the nail mfr would have to know that the nails are being put in the food. With knowledge comes responsibility: if the nail mfr is aware of the misuse, the nail mfr is willfully complicit in the abuse. But to make the analogy work, the restaurant would also have to be unaware that the nails were ending up in the food (because AIDM is undisclosed in the case you are trying to make an analogy for).
(update) Europe does not have the machinery to bring thousands of small mom-and-pop shops into court. It makes no sense from a logistical standpoint and it’s a non-starter economically. Though I do not oppose controllers having liability; they should retain it. But processors should also have liability when one giant processor causes the rights of hundreds of thousands of people to be infringed by way of thousands of controllers. To neglect the giant is to fail at data protection.
¹ added that word late! Controllers should be accountable, but not exclusively.
I think you’re approaching this from the wrong trust model. You’re trying to answer: “how can I know if the 3rd party I’ve chosen operates legally?”
The answer is always: you don’t, until you’ve been given sufficient evidence that they do. The restaurant should not put ingredients into their food that they don’t know are safe for consumption. The website operator should not integrate with 3rd parties unless they have proof there’s no illegal behaviour going on.
You don’t need an investigative journalist. It’s clear from the get-go that a closed-source US product is a black box that you shouldn’t integrate with, just as it’s clear from the start that you shouldn’t put nails into spaghetti.
It’s a black box. You can’t know what you don’t know when the information is concealed. Black boxes can be tested (we call it black-box testing), but that is inferior to clear-box testing: too costly and inefficient to rely on wholly. The giant processor has the resources to disclose their use of AIDM. The micro-controller (as in small data controller) does not have the resources to exhaustively simulate hundreds or thousands of demographics of people; they don’t even have the competency to be aware of all the demographics. It’s guesswork and it’s a non-starter. If the controller had that kind of resources, they would not be outsourcing in the first place. Not only is it impractical, it’s also inefficient: to have thousands of small businesses and agencies carry out duplicated tests is an extremely wasteful use of resources and manpower. It just makes no sense. The processor already knows whom they discriminate against.
Black-box testing happens to some extent regardless. But there is no incentive to do it before deployment. The shitshow we call /GDPR enforcement/ ensures that data controllers do their testing on the public, which means people are harmed in the process, because it’s cheaper for the controller (who knows their chances of getting penalised are low, given that DPAs are up to their necks in 10× the workload they can handle).
