AI's March to the Sea - A Bill of Rights, But For Who?
Hey there, it’s Hans
We stand at the inflection point of technological and moral evolution—a moment where our compass is spinning wildly, seeking true north amidst the magnetic forces of innovation, power, and the unrelenting march of artificial intelligence (AI).
As I pour another cup of coffee this morning, the mist over the Amstel isn’t the only thing that’s dense and obscure. The White House Office of Science and Technology Policy (OSTP) has unfurled its Blueprint for an AI Bill of Rights. While the intention behind it is clear, the execution is as murky as the business model of WeWork.
The proposition is as simple and as complex as love and marriage in the age of Tinder. The AI Bill of Rights outlines principles that aim to protect us, mere mortals, from the potentially overwhelming might of AI.
It’s a modern-day David and Goliath, but with algorithms instead of stones, and us without a slingshot.
The Rights of a Society in Pixels and Code
Let’s unpack this. The document is a non-binding whitepaper—yes, non-binding. That’s like telling your Bumble date you want a meaningful relationship while keeping a profile on Tinder. Commitment issues, anyone?
The framework delineates five principles aimed at safeguarding the American public:
1. Safe and Effective Systems
2. Algorithmic Discrimination Protections
3. Data Privacy
4. Notice and Explanation
5. Human Alternatives, Consideration, and Fallback
At first glance, this reads like the Ten Commandments of AI. Thou shalt not bear false witness against thy neighbor’s data. Thou shalt honor thy user’s privacy. It's the stuff of a cybernetic sermon that would have data scientists and ethicists shouting "Hallelujah!"—if they were into that sort of thing.
Where's the Beef?
But here’s where I cock an eyebrow. Where are the teeth?
This document, rich with aspiration, is as enforceable as the "Do Not Call" registry. We live in a world where the big players—Google, Amazon, Facebook, and Apple—play Whac-A-Mole with regulations while sucking up more data than a Dyson on steroids. They're the juggernauts with the armies of lobbyists ensuring that non-binding stays just that.
The OSTP’s AI Bill of Rights is a quintessential example of government playing catch-up with technology that's sprinting like Usain Bolt on a bender. It's reactive, not proactive. The call for transparency and accountability in AI systems is as critical as it is obvious. But the question remains: Who enforces the rules when the foxes are not only guarding the henhouse but also building the fences?
The Algorithmic Overlords
We've been kneeling at the altar of technology for so long. We worship at the feet of these algorithmic overlords, with their promises of ease, efficiency, and the holy grail of personalization. But let’s not forget that these systems are not benign digital gods. They are coded by humans. You know, the same species known for the Spanish Inquisition and "Jersey Shore."
When AI systems decide who gets a loan or who gets parole, we need more than principles—we need protections. Protections with the weight of law, the kind that can levy fines that aren't just the cost of doing business for companies whose market caps dwarf the GDP of some countries.
So What Do We Do?
We need enforceable regulations, not just guidelines. We need the kind of oversight that doesn't just wag a finger but slaps the wrist and cuffs the hands. We need to invest in education so that understanding the nuances of these systems isn't reserved for the elite, and we need to democratize access to the tools that manage and mitigate AI's reach.
We need a Marshall Plan for digital literacy and a New Deal for data governance. We must promote competition to ensure no AI system is too big to fail—or to control. A free market, yes, but no monopolies.
In essence, we need to do more than draft a Bill of Rights; we need to build a culture of responsibility, a framework of accountability, and, yes, a system of penalties that makes compliance non-negotiable.
This is not anti-market. This is how a market should function. So don’t get me wrong: we shouldn’t regulate the way the EU did with cloud computing, killing innovation and growth in the process. We need the market to play its game and a referee who understands the bigger picture and is not afraid to give out red cards.
The AI Bill of Rights is a start, a noble one. But it's the first step in a thousand-mile journey. And that journey doesn't need a roadmap—it needs a backbone.
Power to the Poets
Hans van Dam
Love your article, Hans, but I have wondered whether this challenge isn't the digital equivalent of the conflict between the Right to Bear Arms and the Right to Safety and Security.
A tool can be used for ill or good; regulating those with good intent may have no influence or bearing on those with ill intent.
What good is a slingshot then?