Cyber Scene #81 - California Gold Rush: AI, Chips, and the Tech Arms Race

From L.A. (Los Angeles and Los Alamos) to Peoria and D.C. to Beijing, Artificial Intelligence (AI) is ascending to incalculable heights, and at warp speed. Its reach affects global, national, and "down home" users. This Cyber Scene will briefly discuss some of AI's ubiquitous impacts, how they are changing war and peace, and how regulators worldwide, both governmental and tech, strive to keep up.

The past few years have reconfirmed the indispensable role of semiconductors and chips, and more recently of "generative" AI tools such as ChatGPT. The chip wars between and among countries, which Cyber Scene has addressed, underscore this importance. The nuclear issue, which has recently resurfaced regarding Russia, Ukraine, NATO, and the U.S., is itself an example of the regulatory challenges AI poses. In its unique, underpinning way, AI has become a significant and likely long-duration contributor to military and civilian organizations worldwide, and so merits Cyber Scene's attention. The challenge is how to regulate it.

The expansion of AI worldwide is in exponential overdrive. The 3 June Economist highlights how past "waves of innovation," like desktop computers (Microsoft) and smartphones (Apple), created giants, and asks whether AI might create the next one in a company such as Nvidia, which makes AI chips accompanied by software development tools. The article does question Nvidia's long-term standing, given that AI-tailored chips are now being produced by Amazon's and Alphabet's cloud-computing divisions, behemoths with scale and funding. Governmental regulators, however, have concerns about AI's impact on "society and national security" and how to develop controls for this technology.

Cyber Scene will first look at the worldwide expansion of AI, consider the applications for users, and then address the regulatory issues facing governments and Big Tech.

Big Tech is both creatively and financially inspired. The Economist on 29 May examines why tech giants and wannabes alike are drawn to joining the AI "gold rush." The most recent quarterly statistics on return on investment, particularly for chipmakers, reflecting demand for computing power, are extremely encouraging. ChatGPT has been the talk of the town, indeed of the world, and AI applications appear to be booming in the marketplace, although "…the biggest question-mark hangs over the permanence of the AI boom itself; in Silicon Valley, hype can turn to disappointment on a dime." The article also notes that some policymakers around the world worry about generative AI's potential to eliminate jobs or expand misinformation.

The New York Times's Sarah Kessler on 10 June in "The A.I. Revolution…" cites a 2013 Oxford University study estimating that 47% of U.S. jobs were at risk of automation "…over a decade or two." Current concerns suggest the shorter end of that range is approaching. Goldman Sachs projected in March 2023 that AI tools such as ChatGPT and DALL-E could automate the equivalent of "300 million full-time jobs." (N.B. The U.S. population stood at 334,233,854 as of 1 January 2023, according to the U.S. Census Bureau.)

Moving to specific technical issues at the opposite end of this AI growth story, Foreign Affairs' Lauren Kahn on 6 June analyzes ground rules for AI as it impacts warfare. As an example, she poses the dilemma that would have arisen had AI, rather than humans, been directing the US Reaper surveillance drone that was attacked by Russia over the Black Sea in March 2023, forcing US operators to ditch the drone into the sea. She asks "…what might have happened if a US autonomous drone, enabled by AI systems, attacked a target it was only supposed to surveil." Her overriding concern is the lack of protocols to avoid this type of "warfare trigger," and specifically how "…Washington would reassure the other party that the incident was unintentional and would not reoccur." The flip side of such an incident would remind readers of the recent Chinese balloon episode. The article points out that the risk may be mitigated by the fact that China still relies heavily on US technology, and its AI developers "…already face a far more stringent and limiting political, regulatory, and economic environment than do their U.S. counterparts." She adds that even if US AI developers were constrained by new US regulations, China would be unlikely to be "…poised to surge ahead."

If a drone does not grab your attention, consider film director Christopher Nolan's perspective. Mr. Nolan discussed his new film Oppenheimer with Wired's Maria Streshinsky on 20 June, drawing a connection between the AI of today and the nuclear bomb of yesteryear, itself still an issue today. He addresses nuclear history as it connects to generative AI and regulation:

"People keep saying there needs to be a governing body for this stuff. They say you all need to deal with it. Like you governments. There should be an international agency. But that's the oldest political trick in the book of the tech companies. Right?... Zuckerberg's been asking to be regulated for years… 'Cause they know that our elected officials can't possibly understand these issues. And how could they? I mean, its very specialist stuff, and it's incumbent on the creators and Oppenheimer."

Another opinion on the impact of AI regulation on the U.S. comes from the Foreign Affairs team of Helen Toner, Jenny Xiao, and Jeffrey Ding on 2 June. They argue that regulating AI will not cause the US to lose a technology race, even as Congress moves toward regulatory action. Regarding Cyber Scene's aforementioned "regulatory catch-up," this analysis avers that "The staggering potential of powerful AI systems, such as OpenAI's text-based ChatGPT, has alarmed legislators, who worry about how advances in this fast-moving technology might remake economic and social life." They note that a "flurry" of hearings and "behind-the-scenes negotiations" has occupied Congress, while OpenAI CEO Sam Altman told the Senate that AI regulation could allow China or another country to surpass US technology innovation.

The Government Accountability Office (GAO) has also become involved, thanks to two senators, Gary Peters (D-MI) and Ed Markey (D-MA), per The Hill on 23 June. They too are concerned about the risks of generative AI tools and how to mitigate them, and have asked the GAO, a nonpartisan government agency, for a "detailed technology assessment." They see this as a needed follow-on to the May congressional hearings, including the Senate Judiciary subcommittee session with the CEO of OpenAI. The article also references a framework for AI regulation, released the same week by Senate Majority Leader Chuck Schumer (D-NY), calling on government to move forward on regulation using "…five key pillars: security, accountability, protecting foundations, explainability, and innovations."

As a reminder of how difficult regulation is, the flip side of the Senate oversight just noted comes from Senator Alex Padilla (D-CA). Wired's Paresh Dave on 31 May recounts that Senator Padilla wants AI tools like ChatGPT expanded to serve non-English languages, which he says are being short-shrifted. AI chatbots are less fluent in non-English languages, which in turn could threaten "…to amplify existing bias in global commerce and innovation." About one in five people in the U.S. speaks a language other than English at home, according to Wired. At a congressional hearing in May 2023, Senator Padilla challenged the CEO of OpenAI about the lack of non-English ChatGPT support. California's language gap is huge: 44% of Californians speak a non-English language. The Senator is skeptical of the current effort, arguing that these new technologies should deliver education and greater communication rather than letting language problems create "…barriers to these benefits." Skyler Wang, a UC Berkeley sociologist of technology and AI, goes even further: "We want to think about our relationship with Big Tech as collaborative rather than adversarial."

AI is not the only concern with respect to Big Tech. The New York Times' (the Times) Sapna Maheshwari on 7 June discusses how TikTok's congressional testimony in March 2023, and earlier testimony in October 2021, may have misled Congress. As you likely remember, TikTok is owned by ByteDance, which is in turn closely tied to China. The lawmakers drew their insight from reports by Forbes and the Times. A joint letter from Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) underscored the possibility that US data may be stored in China and that TikTok employees may have access to it. Forbes had reported in May that TikTok stored creators' financial information, including Social Security numbers and tax IDs, on servers in China that were accessible to TikTok employees there. In addition, "The Times reported earlier in the month that American user data, including driver's licenses and potentially illegal content such as child sexual abuse materials, was shared at TikTok and ByteDance through an internal messaging and collaboration tool called Lark."

Twitter is also under congressional scrutiny. The Hill's Ines Kagubare on 8 June adds that lawmakers are increasingly worried about Twitter's data security. She cites a letter sent to Twitter a week earlier by a group of Democratic senators raising consumer privacy and data security concerns in light of the resignations of two of Twitter's security leaders. In 2022, Twitter paid $150 million to settle a privacy lawsuit brought by the US Federal Trade Commission (FTC) and Justice Department (DOJ). The senators found this additionally worrisome and pointed to the 14-day notification requirement that applies when Twitter has "a change in structure such as sales, including change of ownership, and mergers." The latter detail is likely due to suspicions of sleight of hand.
