The European Union has taken a significant step toward regulating the booming field of artificial intelligence (AI) by passing a new law that demands greater transparency from companies. This “AI Act” requires them to disclose details about the data used to train their AI systems, information many have guarded closely.
The rollout will happen in stages over two years, giving regulators time to implement the rules and businesses time to adjust. Specific enforcement details are still being worked out, with a standardized disclosure template expected by early 2025.
This move comes as public interest and investment in AI skyrocket, particularly in “generative AI” like OpenAI’s ChatGPT. These systems can quickly create text, images, and sound, grabbing attention but also raising concerns. One worry is that AI trained on copyrighted material such as books or films might infringe on creators’ rights.
A particularly debated aspect of the AI Act is the requirement for companies using general-purpose AI models (like ChatGPT) to provide detailed summaries of the data used to train them. The requirement aims to address ethical and legal questions surrounding AI development. However, AI companies are pushing back, fearing such disclosures could reveal trade secrets and erode their competitive edge.