Cyber Scene - AI: Driving US Crazy?
By krahal
The relatively new kid on the block is generating tectonic shifts worldwide, from international political issues to how you write a note to a friend or prepare your lawsuit briefs.
In fact, there seems to be only one recent cyber story that, despite its conviviality with cyber issues, is not AI-driven. The Washington Post (The Post) on 21 June reported on the hack of CDK Global's software, which forced car dealerships nationwide to revert to pen and paper. The rest of the world seems to be overflowing with AI as it evolves and expands exponentially. This in turn has generated not only new applications, but also the need for future-focused tech experts, at home and abroad.
One great AI step for mankind also engages relatively smaller steps, ranging from one's personal trust in AI to the regulatory challenges facing countries and their tech leaders. We will examine the international impact, and then consider some complex decisions facing the US and its allies.
The New York Times' (The Times) Brian X. Chen on 23 June looks at the question of whether Apple, Microsoft and Google should have more access to US data. He cites the promotion of new phones and worries about trusting this development, because these new personal phones and computers are powered by AI. Chen writes, "In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will stitch together information across apps you use. And an Android phone can listen to a call in real time to alert you to a scam. More of our personal data may have to leave our phones to be dealt with elsewhere." Getting cloudy!!! (He does not mean "weather.") As the sources of phones and computers expand, far from North America, the issues pose an even more complicated problem. Chen asks, "Can our, or any, government ban a foreign-owned social media platform?"
The Economist on 13 June calls the US move against China's Huawei an "assassination," and sees it backfiring. The article goes on to compare worldwide tech companies, indicating that Huawei is doing quite well: in the first quarter of the year, revenues were up 564%. (Check out the charts the Economist provides.) Although the Chinese may not understand the Economist's "Lazarus" reference--perhaps not applicable in any event, as Huawei never actually died--they would certainly understand their success.
The EU is in a different situation. Bloomberg on 21 June looks at Apple's decision not to roll out Apple Intelligence in the EU market due to European regulations, as reported by Samuel Stolton and Mark Gurman. This means a "raft of new technologies" will not reach hundreds of millions of potential users in EU countries. As of 21 June, Apple would also block iPhone Mirroring and SharePlay Screen Sharing this year, in compliance with the EU's Digital Markets Act.
On the home front, two engaged departments—the Department of Justice (DOJ) and the Federal Trade Commission (FTC)—are working together, arriving at a way for each to take its individual responsibilities for antitrust issues, principally with Nvidia, Microsoft and OpenAI. According to the Times' David McCabe on 5 June, Nvidia, which is rising to the top among its tech challengers due in part to its AI chips, is being investigated for possible violation of antitrust laws. The DOJ will take the lead with Nvidia. The FTC will take OpenAI—creator of the ChatGPT chatbot—and Microsoft, which is connected to OpenAI via its $13bn investment in OpenAI and other AI companies. The FTC chair had earlier explained that regarding AI, the objective was to spot "potential problems at the inception rather than years and years and years later, when problems are deeply baked in and much more difficult to rectify." The report goes on to offer additional information about the three mega-companies, as well as discussion of FTC and DOJ connections with Stanford tech experts and with US states trying to deal with these issues. Nvidia particularly can afford legal fees: according to the Times, its market value reached $3 trillion a month ago.
This movement underscores the Biden administration's intent to exert more control over the power of the largest tech companies. Following the prior administration's 2019 oversight of Google, Apple, Amazon, and Meta, all four have been sued for violation of antitrust law. The US is cast as lagging behind Europe in regulating AI. However, in May 2024, US senators called for $32bn in annual AI spending to "propel American leadership of the technology," while holding off on asking for specific new regulations.
There is much more to the impact of AI. The Economist on 20 June conveys a huge thanks to AI, along with Microsoft and Amazon Web Services, for a new US and UK military future, posing the query, "Is there a better way to wage war?" The tech firms collaborated with arms makers as well as military contractors. In only 12 weeks (and this was 2 years ago), the experiment, named StormCloud, delivered data and generally "impressive" results. The AI-enabled weaponry system was well received, and the budget from 2 years ago has nearly doubled. Although the system did not involve AI "deep learning," AI was "trickling into every aspect of war."
The article goes on in considerable detail to review "PrAIse and complAInts (sic)" in the UK. Kenneth Payne of King's College thinks "…that AI will transform not just the conduct of war, but its essential nature." Still, despite progress, political issues, red tape and funding move more slowly than the UK and US would like.
As AI and other cyber connections expand, new skills are needed. As reported by Wired's Samanth Subramanian, Eliot Higgins and his team of 28,000 "forensic cyber foot soldiers at Bellingcat" (his company) may be an answer. Subramanian interviews Higgins regarding his "…army of digital sleuths in the age of AI." His company, a non-governmental organization (NGO) based in the Netherlands but directed from the UK, focuses on open-source investigation of critical, atrocity-associated disasters. With the world's largest open-source investigative organization and a turbulent world at that, there is much to cover. Regarding AI, Higgins says that he has already seen AI-generated content "…being used as an excuse to ignore real content…AI can generate anything now: video, audio, the entire war zone re-created." He hopes to be able to contribute to capturing war-crime evidence for legal accountability. (N.B. The Hague, the Netherlands, is the site of the International Criminal Court (ICC).)
Amid the flurry of AI's rise and the scramble to keep up with developments moving at the speed of light, there is another side: protecting hallowed infrastructure to support home sweet home and its people, with a focus on the future.
As an aside and a reminder of the flip side of cyber success, the Economist's 13 June story, aptly subtitled "Ghosts in the machines," explores the notion of taboos against sabotage and red lines, looking at a decade of cyberattacks of various kinds against the US and UK. Several highly informed individuals commented on the issue of "honourable espionage work," including former NSA and CIA chief Michael Hayden, former NSA chiefs Keith Alexander and Paul Nakasone, senior advisor Rob Joyce, and present Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly. Easterly sketched out a "massive scale: Imagine not one pipeline, but many pipelines disrupted. Telecommunications going down so people can't use their cell phone. People start getting sick from polluted water, trains get derailed…." Grim indeed. In conclusion, John Hultquist from Mandiant (part of Google) states: "The point is that both good and bad sabotage may require peacetime intrusions," but when an actual war arrives, it's too late: cyberwar needs to be fought in advance.
For those looking for closure regarding the Supreme Court rulings related to online issues, the Times' David McCabe on 10 June, in "Free Speech Online," is here to help. If you have paid the least attention to the Supreme Court nine, you will be aware of divisions among them. Since your life may be impacted by upcoming legal rulings, McCabe gives you a run-down as follows. The questions are McCabe's, and the answers are taken from his responses.
- When can social media sites be sued over what users post? Rarely.
- Can government officials block constituents on social media? Sometimes.
- Can the government force social media sites to host political content? No, except for Florida and Texas; stay tuned.
- When can the United States push social media sites to remove content? Very complicated.
- Can the government restrict access to online pornography? Maybe—stay tuned.
- Can the government ban a foreign-owned social media platform? For the time being, but subject to reversal.
To see previous articles, please visit the Cyber Scene Archive.