The Data Protection Commission (DPC) of Ireland has officially begun an investigation into X, the social media platform owned by Elon Musk, concerning the processing of posts from European users for the purpose of training its artificial intelligence model known as Grok. This inquiry is significant as it touches on the intersection of technology and privacy law, particularly the General Data Protection Regulation (GDPR) that governs data handling within the European Union.

The DPC's investigation aims to critically assess whether X has been compliant with crucial provisions of GDPR, with a particular focus on the lawfulness and transparency of its data processing activities. Such scrutiny is essential as it highlights the ongoing challenges posed by fast-evolving technologies and the legal frameworks intended to regulate them. The DPC has a pivotal role as Ireland is home to the European headquarters of many major tech companies, making it a key player in enforcing EU data protection laws.

The European Union's AI Act, a set of regulations that entered into force in 2024, has raised concerns among tech executives about the implications for their operations. This legislation mandates rules and requirements aimed at ensuring the ethical and responsible use of AI technologies. In a previous move, the DPC had already compelled X to cease harvesting data from European users for Grok's development, drawing attention to the importance of user consent and data privacy.

Following the DPC's initial notification, X took the precautionary step of temporarily suspending the use of data for training Grok, indicating a willingness to comply with regulatory expectations, although the broader implications for its AI operations remain to be seen.

Grok is an artificial intelligence system developed by xAI, a company founded by Musk; it comprises a range of Large Language Models (LLMs). These models power a generative AI querying tool available on the X platform, demonstrating the increasing reliance on sophisticated AI technologies in everyday applications. The training of these LLMs depends on access to vast and varied data sets, which raises questions about the ethical use of publicly accessible information.

The ongoing investigation will specifically examine a subset of data controlled by X Internet Unlimited Company (XIUC), X's Irish establishment, focusing on personal data derived from publicly accessible posts made by users within the EU and European Economic Area (EEA) on the X platform. The DPC's objective is to ascertain the legality of processing this personal data for the training of Grok's LLMs, a process that must adhere to strict data protection rules.

This inquiry was officially sanctioned under Section 110 of the Data Protection Act 2018, with the decision being communicated to XIUC by Data Protection Commissioners Dr. Des Hogan and Dale Sunderland in April 2025. The timing of this decision underscores the DPC's proactive stance in addressing data protection issues in the rapidly advancing field of AI.

Adding to the global scrutiny faced by X, Canada's privacy watchdog launched a separate investigation in February 2025. The Office of the Privacy Commissioner of Canada is investigating whether X violated privacy regulations by using personal data from Canadians to train its AI models. This probe is grounded in the Personal Information Protection and Electronic Documents Act (PIPEDA) and seeks to determine compliance with federal privacy laws concerning the collection, use, and disclosure of Canadians' data for AI model training.

The convergence of these inquiries highlights a growing vigilance among regulators worldwide regarding the ethical implications of AI and the responsibilities of companies like X in managing personal information. As the digital landscape continues to evolve, the outcomes of these investigations could significantly impact how technology companies operate and interact with user data.