The release of the proposed AI regulation by the European Commission has been “welcomed” with a predictable chorus of “thank you, but” about its effectiveness in guaranteeing civil rights. Meanwhile, on the ground, each European state deals with the “social externalities” of AI (and thus of personal data gathering) in its own way: sometimes by denying them (as in the Dutch strategy to fight Covid with Palantir), sometimes with a posteriori legislation (see Italy blocking real-time face recognition below). When, as in the new proposal, the obligations rest on self-assessment by the providers of “high risk” AI systems, we have a playground for arbitrary box-ticking exercises. Moreover, compliance requires resources: it is as if competition had shifted from the quality of the service to legal capacity and turnover, handing a competitive advantage to the big players.
The elephant in the room is public procurement. How should public administrations be equipped and regulated so that they can select providers responsibly and transparently? Which values and power dynamics are reflected in the legacy infrastructure that these services are plugged into? Regulating AI towards equality is also a matter of embedding a sensitivity for the commons into public administrations’ digital literacy and technical enablers.