Cyber Scene - AI Techno-polarity: A Chilling View from Above

By krahal

Artificial Intelligence (AI) has catapulted into our cyber life with little advance notice. Globally, brilliant tech firms have seized the opportunities AI offers, creating applications faster than one can spell "AI." This advancement is beyond Prometheus' fire: he was "regulated," but AI is wide open, arriving across all 24 time zones as cyber experts seize the nanosecond to find new uses that lead to newer uses still.

What behemoth impacts does AI deliver now, or might it deliver in the future?

Among the many who attempt to keep up with AI's transformation, Foreign Affairs' Ian Bremmer, President and Founder of Eurasia Group and GZERO Media, and Mustafa Suleyman, CEO and Co-Founder of Inflection AI and DeepMind, offer an overarching AI perspective that is stunning. They capture a sliver of the future in "The AI Power Paradox: Can States Learn to Govern Artificial Intelligence--Before It's Too Late?"

Bremmer and Suleyman focus on the strength, uniqueness, and breadth of the technology that could enable these AI firms to take control over nation-states. They believe that "…unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power as it threatens the status of nation-states as the world's primary geopolitical actors." They analyze what has been happening, noting that the world is less predictable and more fragile than in the past. They explain that over the last 10 years or so, Big Tech has expanded its independence to the extent that its firms have become "sovereign actors" in the digital realms they command. The Foreign Affairs authors continue, noting that AI has not only accelerated this trend but "…extends it far beyond the digital world. The technology's complexity and the speed of its advancement will make it almost impossible for governments to make relevant rules at a reasonable pace. If governments do not catch up soon, it is possible they never will."

The speed at which AI is beaming down around the world is "faster, higher, and stronger," as expressed by the Foreign Affairs study. Moreover, AI is problematic in that it is both a moving target and an evolving weapon, according to Bremmer and Suleyman. They examine what governments are trying to do, noting that they are starting well behind the curve. They cite the EU's AI Act as the most ambitious effort so far but note that it does not take effect until 2026. How many "AI years" might that be? The UK also has a plan in place, but it is voluntary. Nothing has been done at the global level.

With this backdrop, they point out that "…regardless of what policymakers in Brussels or Washington do, technologists, not policymakers or bureaucrats, will exercise authority over a force that could profoundly alter both the power of nation-states and how they relate to each other." The authors also note that as the US and China continue to worry about the other gaining an edge in AI, that worry overrides each country's concern about the technology's risk to society, "…or to their own domestic political authority." They see this as a zero-sum game.

They believe that whatever the political chatter about regulating AI, "…there is still a lack of political will to do so." Bremmer and Suleyman conclude by saying that "…this century will throw up few challenges as daunting, or opportunities as promising, as those presented by AI…Now, they (policymakers) must build a new governance architecture to contain and harness the most formidable, and potentially defining, force of this era….2035 is just around the corner."

For those who prefer podcasts, Bremmer and Suleyman offer a 7 September podcast that discusses the technological discoveries they anticipate and projects the direction of AI. They identify the next few years as "staggering…and really, really dangerous…as AI is unleashed to 8 billion people…and is anti-Westphalian." Staying with AI's broader impact, the Wall Street Journal's James Rundle reports that former US National Cyber Director Chris Inglis has joined Hakluyt, a UK corporate advisory firm that counsels boards and senior executives on areas such as geopolitical tensions and corporate actions, including mergers and acquisitions. This reflects the Foreign Affairs global-focused thesis discussed above. Inglis notes, with respect to AI, that "The interesting thing about ChatGPT isn't what it is at the moment, but the speed at which it's come at us, and perhaps, what might come at us in the next few weeks, not years."

As seen in past Capitol Hill hearings, some US political leaders, such as Senator Schumer (D-NY), cited last month in Cyber Scene, are trying to make the Hill tech savvy, but timing is everything, and the Hill works slowly at best. However, the Washington Post reports that Senators Martin Heinrich (D-NM) and Mike Rounds (R-SD) are leading a bipartisan group as Congress races to address the risks and benefits of AI. The Senate AI Caucus co-chairs joined the Post's Leigh Ann Caldwell to preview an upcoming congressional forum on AI policy and regulation for the rapidly evolving technology. When asked about regulation, Sen. Heinrich said it was important to find gaps and close them, underscoring the importance of bipartisanship and consensus in moving forward. Sen. Heinrich hopes to move to legislation in 2024. Both Senators address more in videos linked to the article cited above. Sen. Heinrich has also worked with former Senator Rob Portman (R-OH) on an AI subcommittee.

As the oversight and regulation of AI increase, Wired's Vittoria Elliott appeals for government and Big Tech help in dealing with AI-driven disinformation: "Too bad the government and Big Tech aren't doing enough to head off chaos." The article discusses several "true or false" issues (well, the ones presented were false) but concludes by saying that "…we not only are going to be fooled by fake things; we're not going to believe real things."

It could be even worse. The Economist's cover story on "AI Voted" takes it a step further regarding 2024 elections, in which some 4 billion people will vote, including in the US, UK, India, Indonesia, Mexico, and Taiwan. It highlights that in the past, disinformation was created by human beings, whereas generative AI is expected to be put to work in a supercharged manner, "…with models that can spit out sophisticated essays and create realistic images from text prompts." OpenAI, maker of ChatGPT, has promised to monitor usage to identify political-influence operations. The discussion points out that Big Tech has been criticized both for allowing disinformation in the 2016 US election and for removing too much of it in 2020.

Another serious issue is the US-China AI problem addressed by the New York Times' (The Times) Julian Barnes and former Times Beijing bureau chief Edward Wong. They open by noting that "Both countries are racing to develop their artificial intelligence technology, which they believe is critical to maintaining a military and economic edge and will give their spy agencies new capabilities." This has led to irritating rifts spilling over into high-level diplomatic problems. Most recently, Barnes and Wong write of "…a dozen penetrations by Chinese citizens of military bases on American soil in the last 12 months." This followed the Chinese balloon problem, which impacted Secretary of State Antony Blinken's visit to China and reportedly angered Chinese President Xi. It appears the People's Liberation Army neglected to tell Xi, who chairs the country's senior military body, about the balloon.

Barnes and Wong also add that there has been increasing aggression from China, which is sending "…a huge deployment of collectors" to the US. US officials believe that, despite the large number of Chinese operatives sent abroad, the US lead in AI is expected to offset China's population advantage. In tit-for-tat fashion, "Chinese officials hope the technology will help them counter American military power, including by pinpointing U.S. submarines and establishing domination of space, U.S. officials say." What worries US officials more is China's reported plan to place agents in, or recruit assets within, tech companies, the defense industry, and the US government. Moreover, China's AI software can recognize faces and identify an American spy by his or her gait.

To see previous articles, please visit the Cyber Scene Archive.
