Cyber Scene - Election Fever: Iran and AI Rising

By krahal

The backdrop of the US presidential election on 5 November comes with increasing cyber heat, for the worse, and with countermeasures, somewhat for the better.

Even as Iran is very much engaged in the Israel-Gaza war, which has spilled over from the Middle East to the Suez Canal and the US, it has also engaged in hacking the US presidential election. These cyberattacks occur even as the US attempts to find a peace accord with Iran.

It seems, however, that Iran is attacking bilaterally: neither presidential party has been "neglected." The Washington Post's (Post) Joseph Menn reported on 9 August that Microsoft revealed Iran was posting fake news blasting first, on the right, the Republican presidential candidate and then, on the left, the Democratic Party. Iran had engaged in the 2020 election as well, and US intelligence agencies told reporters that they were initially seeing attacks on the Republican presidential candidate.

Despite Microsoft's mention of Russian activity, Menn focused on Iran and reported: "Iran has stepped up its efforts to interfere in the November election and ramp up American polarization, including through hacking attempts and fake news sites aimed at the far left and far right." Iran started by publishing from its own sources, while another group directed by Iran's Revolutionary Guard succeeded in hacking individuals engaged in the US campaign. This may have opened the door to more attacks. At that point, attacks did not appear to extend to the new Democratic presidential candidate, selected only on 21 July. Microsoft explained to the press that it would not be endorsing any presidential candidate, despite the X boss choosing to do so.

What comes next is an FBI probe regarding Iranian hack attempts against the Democratic and Republican presidential campaigns. Post reporters Devlin Barrett, Josh Dawsey, Tyler Pager, Isaac Arnsdorf and Shane Harris explained on 12 August that the FBI was tracking Iranian targeting of both the Trump and Biden-Harris associates, advisers and campaigns. Moreover, on 12 August the FBI openly acknowledged "…a high-stakes national security investigation months before Election Day." The FBI contacted Google, as well as other companies, to determine whether phishing had been the means of intrusion and which individuals it may have reached. At least three staffers working for the Biden-Harris campaign (pre-21 July) received spear-phishing emails that appeared legitimate. A sensitive investigation followed to determine the magnitude of the attack.

The Hill's Julia Shapiro announced on 16 August that OpenAI was said to have disrupted an Iranian influence operation that used ChatGPT to attempt to "…generate content related to the US presidential election and other topics." Those accounts are now banned by OpenAI. The generated content was not likely to have made any meaningful impact, as it received few or no likes, shares or comments on social media. The operation, known as Storm-2035, used ChatGPT to create negative content about candidates of both US presidential parties, which was then also shared on social media.

The Post's Kevin Schaul, Pranshu Verma and Cat Zakrzewski explained on 15 August that it is most challenging to distinguish truth from deepfakes created with new AI technology. They explained that AI-created content was flooding the web and making "reality" difficult to recognize. This particular discussion is extremely useful to readers, as the prose is accompanied by photo examples of deepfake detection tools that "…struggled to tell what was real." Sadly, the synopsis is as follows:

Deepfake detectors have been marketed as a silver bullet for identifying AI fakes, or "deepfakes." Social media giants use them to label fake content on their platforms. Government officials are pressuring the private sector to pour millions into building the software, fearing deepfakes could disrupt elections or allow foreign adversaries to incite domestic turmoil. See why AI detection tools can fail to catch election deepfakes.

Thirty tech companies tried, unsuccessfully, to introduce AI to federal, state and local political campaigns, according to the New York Times' Sheera Frenkel. Frenkel interviewed 23 companies and 7 campaigns for her research, concluding that "The Year of the AI Election…Wasn't." It was not, however, entirely due to deepfake interference.

One unhappy approach Frenkel describes was voice technology that could "…make tens of thousands of personalized phone calls to voters." The individual who pursued this discovered that "…the only thing voters hated more than a robocall was an AI-backed one." Some voters were nervous about AI, some disliked phone calls, and some particularly distrusted AI's generation of photos or videos of candidates. Those who read the prior Post citation will appreciate the concern over doctored photos or videos. Frenkel also notes that the famous Taylor Swift photo posted by a presidential candidate, which led viewers to assume an endorsement, caused quite a political "withdrawal" response across the US.

As an additional assessment connected to the November election, the Post's Glenn Kessler produced a "Politics FACT CHECKER: The Truth Behind The Rhetoric," putting the kibosh on whimsical "statements" by many luminaries. He reviews 20 such statements made for wide publicity. The reviews range from 11 July 2024 to 22 August 2024 and are marked by Pinocchios (for the not-so-truthful).

This leads to David McCabe's report from the Times on 20 August on legal liability for user posts. He cites Facebook, X, YouTube and others that "insulate themselves" from liability thanks to a 1996 law. Well, much has happened since then. McCabe notes that "…protection from this law, Section 230 of the Communications Decency Act, is so significant that it has allowed tech companies to flourish."

Now, however, a lawsuit filed in May against Meta (owner of Facebook, Instagram, and WhatsApp) asks the federal court "…to declare that a little-used part of Section 230 makes it permissible for him to release his own software that lets users automatically unfollow everyone on Facebook." Yes, it is a bold interpretation, but while the judicial result is awaited, it may also be applied to "…rein in the power of those social media giants."

Meanwhile, the National Public Data (NPD) breach, cast as "the slow-burn nightmare" by Wired's Lily Hay Newman on 16 August, continues to unfurl—"Social Security numbers, physical addresses, and more—all available online." After months of confusion, leaked information from a background-check firm underscores the long-term risks of data breaches. What is worse is that the breach was only acknowledged in mid-August 2024, whereas the problem began late in 2023. The lack of openness may have been influenced by the fact that public acknowledgment came just "…as a trove of the stolen data leaked publicly online."

By June 2024, a hacker known as USDoD was reportedly trying to sell 2.9 billion records in a "…trove of data on cybercriminal forums…that impacted the entire population of USA, CA and UK." It got worse. As of June, 100 email addresses, as well as Social Security numbers with no associated email addresses, had been confirmed. The public announcement came in mid-August, with fudgy references like "may have" or "appears," and the breached information was suspected to include names, email addresses, phone numbers, Social Security numbers and mailing addresses. The company says it has been cooperating with "law enforcement and governmental investigators." NPD is facing potential class action lawsuits over the breach.

Newman goes on to distinguish a theft from inside a well-known company such as Target from one at a data broker that does not declare it; in the latter case, determining what is true is difficult. She concludes: "Typically, people whose data is compromised in a breach—the true victims—aren't even aware that National Public Data held their information in the first place." Sadly, to date there is no happy conclusion to this breach.

Moving on to the big picture, Foreign Affairs' 5 August "Wars of the Future" explores the future of warfare with considerable focus on the role of AI. Its authors are certainly knowledgeable: former Chairman of the Joint Chiefs of Staff Mark Milley (2019-2023) and Eric Schmidt, former CEO and Chair of Google and co-author, with Henry Kissinger (his last book) and Daniel Huttenlocker, of "The Age of AI: And Our Human Future." They start with the war in Ukraine, with AI deployed on both sides, which are "…racing to develop even more advanced technologies…" as the Russian military fields many AI-powered drones. Both training and technology are needed: "Robots and AI are here to stay."

They draw from WWII and Waterloo, Ukraine and Israel/Gaza, and note that military strategists find it difficult to determine which innovations may "shape future battles." As it stands, they note that China and Russia are gaining AI ground, although the US is still strong. Even with AI, however, vulnerability on land and sea remains problematic. They consider something even worse: a scenario in which "…AI warfare could endanger humanity." They underscore that the future should move to nimble units— "a critical advantage given the fast pace of AI-powered war." And they argue that even if China and others refuse to agree on AI issues, states that refuse to accept limitations on military AI could face economic restrictions.

As a possible step in the right direction, Mikayla Easley reported in DefenseScoop on 21 August that, according to Air Force Chief of Staff General David Allvin, the effort to integrate AI and machine learning (ML) tools is moving ahead, and that part of the service's plan is to "reoptimize" and revamp how it generates readiness.

As part of its plan to "reoptimize" for great power competition, the service is revamping how it generates readiness for contested environments in the Indo-Pacific by conducting new large-scale exercises, enhancing supply chains and more. Through these initiatives, the Air Force is looking at how AI and ML can enhance preparedness by improving command and control in operations and the way the organization accounts for spare parts, Allvin said Wednesday during a media roundtable with reporters.

General Allvin explains that the "Air Force is moving as fast as possible to integrate AI and ML tools into its efforts to boost readiness levels and prepare for future conflicts." In order to do this, he believes that the Air Force must integrate data from "various siloed systems."

General Allvin notes that "While issues surrounding AI for command-and-control efforts might still need to be worked through, the technology is a bit more mature for how the Air Force understands and prepares its inventory of spare parts. The service currently has a better understanding of its individual weapon systems and predicting where they are, when they might break and what additional parts it needs to have at the ready."

An AI article in the Economist entitled "Oh, the things AI can do" (a nod to Dr. Seuss) opens with the volley: "Artificial Intelligence (AI) can be described as the art of getting computers to do things that seem smart to humans." It acknowledges multiple advantages of AI, but notes that AI systems cannot perform well at choosing which of the things they generate are most sensible or appropriate. It explores the robotic method and closes by asking "Wait, did a chatbot write this story?" while relying on humans to determine what to apply or discard, and to judge AI's capacity for better or worse.

To see previous articles, please visit the Cyber Scene Archive.
