Artificial Intelligence - What Abu Dhabi's AI Minister Gets That Glenn Beck Doesn't
“We overregulated a technology, which was the printing press,” said Al Olama. “It was adopted everywhere on Earth. The Middle East banned it for 200 years.

“The calligraphers came to the sultan and said: 'We’re going to lose our jobs, do something to protect us'—so, job loss protection, very similar to AI,” the UAE minister explained. “The religious scholars said people are going to print fake versions of the Quran and corrupt society—misinformation, second reason.”

Omar Sultan Al Olama is Minister of State for Artificial Intelligence in the United Arab Emirates.

Al Olama compared the regulation of AI to the Ottoman Empire's ban on the printing press. We have seen similar things before. When Genentech was founded to make insulin from bacteria for diabetics, it had a rival that was cloistered in a remote lab, making no progress because of the fear-mongering of political interests. Genentech got there first; now millions of lives depend on its genetically modified bacteria. Fear-mongering about "AI" is coming from all directions. Elon Musk has likened it to summoning a demon, which hasn't stopped him from putting it in his Teslas. Glenn Beck, notorious for being a leftist darling after trying to get Hillary Clinton elected over Trump in 2016, has called for an immediate stop to all AI research. Beck wasn't specific about what kind of AI research, or whether a stop would help him get a Democrat into office over Trump in 2024. I find the motives for wanting to regulate AI to be either misguided or malicious.

Skynet isn't going to happen because a chatbot can write a haiku or answer a math problem. However, there is an existential threat to humanity greater than nuclear-powered humanoid robots chasing people through time machines: the administrative state attempting to prevent humanity from accessing digital resources and cloistering them for a select and powerful few. You will never stop AI research from continuing, but you can throttle its progress and limit access to a select few. This does a few things. It stifles meaningful and disruptive technology that would improve people's quality of life. Today anyone can download an LLM and run it against a dataset, and people are currently working on different AI models for different reasons. Imagine being interested in AI beyond current transformers but not being able to access any information on it because it's overregulated. Potential AI models will never be built or even conceived because someone is pushing a political agenda over a phobia around "if" statements. Start-ups doing biotech research with AI? Only if you have the money and the political muscle to navigate the administrative state's barriers. That's sad, because it will lead to risk-averse ventures and stifle meaningful advancement.
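The accessibility point is easy to demonstrate: the core mechanism inside the transformer models discussed here, scaled dot-product attention, is just a few lines of arithmetic that anyone with a laptop can run. A minimal sketch in Python with NumPy (toy random inputs, not a real trained model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    # Numerically stable softmax over each row -> attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mix of the values

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one mixed vector per token
```

That is the "scary" machinery at the heart of today's chatbots: matrix multiplication and a softmax, not something only a licensed priesthood can be trusted with.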

Regulating AI helps some very powerful and rich interests. First, it expands the power base of unelected bureaucrats: if they don't like you, they can simply create a regulation and fine you. It also provides the regulatory capture big business needs to avoid being disrupted by start-ups. Don't worry about competition eating your lunch; they can't get through the red tape. So don't be surprised when CEOs and fund managers are eager to point out how scary AI is and how badly it needs to be regulated. Worse yet, you can slow AI in America, and maybe the EU, but Chinese interests will go unabated. That would give one of the most genocidal regimes in history a leg up on everyone else. Calls to regulate AI because it can kill become a self-fulfilling prophecy when national security threats end up with better systems because they don't have the red tape.

So, if you want to kill meaningful technological developments that could improve people's quality of life, help powerful political and financial interests, and create massive national security threats along the way, by all means, listen to Glenn Beck. If AI does have dangers associated with it (everything does), then you need more eyes on it, not fewer. The best way to get them isn't regulation; it's propagation. Teach ten-year-olds to set up their own transformers with their own data sets. Cultivate as many AI experts in the field as possible. Make it commonplace; that's how people can see what is really a danger and what is fear-mongering. Keeping it in the hands of a select few puts us all in danger of something far worse.


