Artificial intelligence (AI) is widely used in contemporary marketing for profiling, ranking, and analysis: predictive marketing and personalization, chatbots, dynamic pricing and demand forecasting, content creation, and language optimization. Certain AI methods have become so ubiquitous that finding intriguing topics around AI has itself become a challenge.
Nevertheless, when it comes to setting the right paradigm for the legal regulation of AI, EU institutions remain in limbo, struggling to navigate into clearer waters. The key reasons behind this decision-making paralysis are ethical, moral, and human rights concerns.
Over the last two weeks, AI enthusiasts have buzzed around a hot topic: a Google AI engineer claimed that Google's chatbot generator LaMDA showed signs of sentience and had come to life, allegedly expressing a fear of death, loneliness, and a sense of humour. While the claim was widely disputed by both Google and independent experts, it has sparked a worthwhile debate on AI ethics and our understanding of consciousness; in particular, what does it mean to be sentient?
At this point, there is no widely accepted rule on what constitutes a conscious being, other than our own generic understanding of what a person is: someone capable of feelings, thoughts, opinions, and reasoning. Fundamentally, this is what makes humans human. Even if general intelligence is not yet within reach, the governance of AI-driven decisions that affect the lives of human beings, or otherwise have a significant impact on them, is far from trivial.
Against this backdrop of ubiquitous AI application, EU institutions are struggling to strike the right balance between seizing the momentum to craft a world regulatory cradle for the development of AI technologies on the one hand, and protecting fundamental human rights (the Charter of Fundamental Rights, the GDPR) on the other.
In April 2021, the European Commission unveiled a proposal for an EU regulatory framework on artificial intelligence (the AI Act, or AIA). With the adoption of the AIA, the EU framework of five mega digital acts (DGA, DMA, DSA, GDPR, and AIA) will be complete.
However, the AIA proposal fails to capture and refine the core relationship between the AIA and the fundamental principles of data protection. It does not address user, client, or end-user rights, such as the right to challenge an automated algorithmic decision and to require human oversight or review (Article 22(3) GDPR).
A valuable opportunity was missed and wasted. The ubiquitous presence of AI is thus constantly overshadowed by the sense that the GDPR is moving at a different pace and in a different direction.
On 16 March 2022, an Austrian court referred a request to the ECJ for a preliminary ruling (Case C-203/22), which will either deepen the schism between AI and the GDPR or become a cornerstone of their mutual synchronization. The questions raised by the Austrian court primarily concern the data subject's rights, guaranteed by virtue of Article 22(3) of the GDPR, to express his or her point of view and to contest an automated decision within the meaning of Article 22 of the GDPR. One of the questions raised is whether the user of an AI method is required to disclose to the individual information on the specific processing concerning him or her, inter alia:
- potentially pseudonymised information on the way the individual's data is processed,
- input data used for profiling,
- parameters and input variables used for rating,
- influence of these parameters and input variables on the calculated rating,
- information on the origin of the parameters or input variables,
- an explanation as to why the individual was assigned a specific rating and the reasoning behind it,
- a list of the profile categories and an explanation of the rating implication associated with each of them.