Standards for the safe and ethical use of Artificial Intelligence
- Javed Sajad

- Jul 12
- 1 min read

📢 The rapid evolution of AI standards, particularly with the recent OECD.AI catalogue and the National Institute of Standards and Technology (NIST) standards, can understandably cause anxiety for organizations crafting their own AI strategy and policy. My recent consultations with companies and governments across the USA, UK, EU and the Caribbean reveal a strong desire for robust internal frameworks for safe AI use. However, the perceived fragmentation of standards creates significant frustration. Moreover, the NIST "Plan for Global Engagement on AI Standards" (July 2024) seeks to promote global alignment on AI standards approaches, which could minimize trade barriers by facilitating compatible practices. This alignment effort is the nexus upon which I advise companies and governments to build their AI governance infrastructure.
💡 As a multijurisdictional expert, I bridge this gap by leveraging common best practices and applying them within specific legal and regulatory landscapes. For instance, NIST's excellent AI standards catalogue serves as a strong foundation for ethical and secure AI policies in the US. The new global engagement plan indicates a potential harmonization effort, but fear not! By adopting a multijurisdictional approach tailored to each company's needs, we can craft compatible policies and strategies.
🛠 This mirrors my experience in crafting cybersecurity regulation. While the recent UN Cybercrime Treaty is a significant development, the Council of Europe's Budapest Convention (established much earlier) and its Cybercrime Programme Office (C-PROC) provided the infrastructure for drafting policies and laws that run parallel to UN initiatives. Just as in cybersecurity, companies and countries wrestling with AI face similar anxieties due to the fragmentation of standards.