This article analyses the tensions between the GDPR and the AI Act in the context of AI development, focusing on the technical realities of centralised training. It evaluates Federated Learning (FL) as a decentralised, privacy-preserving alternative that advances compliance with data protection principles, unlocks siloed data, and exemplifies “data protection by design”. At the same time, FL complicates compliance with the AI Act’s obligations for high-risk AI systems, in particular data governance, bias mitigation, and robustness, as these obligations were drafted with centralised training in mind. Further analysis details the energy–privacy trade-off inherent in FL. The article concludes that while FL provides a credible pathway to trustworthy, human-centric AI development, its distinctive features demand further technical research and either a flexible interpretation of the AI Act’s essential requirements or a dedicated regulatory framework.
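To make the contrast with centralised training concrete, the following is a minimal, hypothetical sketch of the Federated Averaging (FedAvg) pattern that underlies most FL deployments: clients train locally on data that never leaves their premises, and only model parameters are aggregated centrally. The toy objective, client data, and function names are illustrative assumptions, not drawn from the article; production FL frameworks add client sampling, secure aggregation, and communication layers.

```python
# Illustrative FedAvg sketch (hypothetical toy example, not the article's method).
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """Client-side training: raw data stays on the client."""
    w = weights.copy()
    for _ in range(steps):
        # Gradient of mean squared distance to the client's local data mean.
        grad = w - data.mean(axis=0)
        w -= lr * grad
    return w

def fed_avg(client_datasets, rounds=10, dim=2):
    """Server-side loop: aggregates model updates, never personal data."""
    global_w = np.zeros(dim)
    sizes = np.array([len(d) for d in client_datasets], dtype=float)
    for _ in range(rounds):
        updates = [local_update(global_w, d) for d in client_datasets]
        # Weighted average of client models, proportional to dataset size.
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

rng = np.random.default_rng(0)
# Three clients with differently distributed (non-IID) local data.
clients = [rng.normal(loc=c, scale=0.1, size=(20, 2)) for c in (0.0, 1.0, 2.0)]
model = fed_avg(clients)
```

The server only ever sees parameter vectors, which is what grounds the “data protection by design” claim, and also why the AI Act's centralised data-governance assumptions fit awkwardly: no single party can inspect the full training set for bias or quality.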