Cyber Scene - Cyber Threats and Counterthreats

By krahal

The world seems to twirl faster. We have recently enjoyed a rare solar eclipse and a visit by auroras. Mother Nature has also sent tornadoes—all within a few weeks. The cyber world spins just as quickly, and perhaps even faster. The challenge is embracing and expanding the good while controlling the bad.

When issues are beyond a country's borders, as is cyber by nature, regulation with impact beyond states, regions, countries, and continents may be required. A current candidate may be OpenAI.

To review some of the recent threats, let us look at Wired's Andy Greenberg's 24 April discussion of new hacks of Cisco firewalls used to gain access to US government networks. This new campaign, ArcaneDoor, which appears to come from a state-sponsored source given its focus and sophistication, has done the opposite of what firewalls are built to do: by exploiting two zero-day vulnerabilities, the attackers established footholds inside the firewalls themselves. The campaign, disclosed in late April, has been active over recent months: "Cisco is now revealing that its firewalls served as beachheads for sophisticated hackers penetrating multiple government networks around the world." Although Cisco did not identify the source, Wired was informed by sources familiar with the investigation that it was "…aligned with China's state interests."

Recent cyber threats continue to rise.

At the other end of the spectrum, your phone's fragility is also of concern and subject to hacking. The Economist's Science & Technology cybersecurity coverage of 17 May states it clearly: "For years experts have warned that a technology at the center (sic) of global communications is dangerously exposed. Now there is more evidence that it has been used to snoop on people in America." Speaking to the Federal Communications Commission (FCC), a well-known regulator, Kevin Briggs of the familiar US Cybersecurity and Infrastructure Security Agency (CISA) cited not only numerous attempts to steal location data, voice calls, and texts in the US, but also attempts to insert spyware and take over phones in an effort to influence voters from abroad via texts. Despite work on stronger defenses within the US, "…much of the world remains vulnerable."

For readers seeking more detail, these hacks were connected to Signaling System 7 (SS7), which was exploited as far back as April 2014 by Russian hackers spying on Ukrainian politicians. SS7 was also abused in 2017 against German telecoms (stealing money from customers) and in 2018 by Israel, which attempted via a British Channel Islands operator to access SS7 and "…users around the world." China used the same access in 2014 to steal data, including phone numbers, from the Office of Personnel Management. Despite counter-efforts to mitigate these attacks, Apple recently warned users in ninety-two countries that they had been targeted by a "mercenary spyware attack." A newer successor to SS7 named Diameter was mentioned but not discussed in detail; however, it was described as "worse in some ways."

The article goes on to cite SS7 attacks connected to several regions across the Middle East, Europe, and Asia, as well as Russian dissidents, which Mr. Briggs considers "the tip of the proverbial iceberg of SS7 and Diameter-based location and monitoring exploits that have been used successfully."

And how ubiquitous is SS7? The number of people it reaches exceeds the number of internet users worldwide.

Another angle on worldwide cyber reach is the recent example of OpenAI, with pros and cons. The state-level public policy media outlet Pluribus News on 24 May discusses the responsibility of US state lawmakers to "…grapple with the challenges and potential that Artificial Intelligence is bringing to the already heavily regulated health care industry." They were particularly concerned about advances in "so-called" generative AI technology that could introduce biases into treatment without patients knowing about it.

The complexity of AI at the federal US level is seriously challenging. Lawfare's Kevin Frazier on 24 May analyzes Senate Majority Leader Chuck Schumer's (D-NY) long-awaited Bipartisan Senate Working Group Roadmap for Artificial Intelligence Policy, released on 15 May. Leader Schumer explained that the roadmap was intended to empower and inform committees, not to produce legislation. However, he added that it would benefit the committees if they were to "…come out with bipartisan legislation that we can pass…before the end of the year." The roadmap includes work on innovation in AI, the impact on the workforce, elections and democracy, privacy and liability, safeguards, intellectual property, national security, and other issues. Mr. Frazier notes that the savvy working group, which in addition to Leader Schumer includes Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), hosted nine bipartisan AI Insight Forums to address the complexity and implications of AI.

A complex issue like AI is open to criticism, but the Working Group's intent, per Leader Schumer, was not "…an exhaustive list of policy proposals" but "…the beginning of a lengthy legislative process." The roadmap also embraces the goal of $32 billion per year for non-defense AI innovation, as proposed by the National Security Commission on Artificial Intelligence (NSCAI). It likewise underscores the importance of funding outstanding AI initiatives connected to and approved under the CHIPS and Science Act. Six additional issues are addressed in this pithy discussion.

Mr. Frazier also addressed elections and democracy, among other issues, citing South Korea's 10 April election, which successfully mitigated electoral interference from AI. Following up on President Biden's October 2023 AI Executive Order, the Senate roadmap calls for robust protections against AI disinformation while protecting First Amendment rights.

As a follow-up, The Hill on 22 May reported that Leader Schumer urged Senate committee leaders to advance AI legislation; he had convened committee chairs that week after releasing guidance on how to regulate AI:

"I urged our Chairs to report out legislation to harness innovation and implement vital policy reforms to democratize access to AI, curb harmful bias, counter deepfakes, protect and adapt the American worker for the AI age, while delivering key guardrails and sustainable innovations concerning explainability, transparency, interoperability, and other actions to reduce societal risks of AI."

An additional AI issue, one involving antitrust law, is also before the Federal Trade Commission (FTC). Again, The Hill on 23 May reports that FTC Chair Lina Khan, speaking at the Wall Street Journal's "Future of Everything Festival," said that AI models trained on content from news websites, artists' creations, or people's personal information may be in violation of antitrust laws. She went on to say that the FTC is looking at how major companies' data scraping could violate people's privacy rights.

While the Senate moves forward on AI, the media is having its own AI disagreements. The Washington Post's (Post) Laura Wagner and Gerrit De Vynck on 27 May capture the tension:

"As tech companies race to perfect machines that can already produce humanlike text, summarize long documents, and describe images and videos, media companies are struggling to figure out where they fit into the new gold rush."

Wheeling and dealing between media companies and AI companies seems to be one option. OpenAI is not forthcoming about what data goes into its technology, and the Post article cites the New York Times' suit against OpenAI and Microsoft for copyright infringement.

As AI's connections to other industries evolve, two former OpenAI board members, Helen Toner and Tasha McCauley, aver that governance from outside the AI industry is necessary, as reported in The Economist on 26 May. They are concerned about the challenge a revolutionary technology faces in operating in the interest of both its shareholders and the "wider world." They would know: they served at OpenAI for a total of seven years, both leaving in 2023. They are ardently opposed to self-regulation.

The European Union (EU) Parliament is holding its election from 6 to 9 June for two hundred million voters across twenty-seven countries. Per the Post's Lee Hockstader on 27 May, the EU has been very concerned about these elections, particularly since 2022, when "Moscow jumped into action." The persistence of Russian disinformation sites, "popping up around the internet like mushrooms after a cloudburst," came as a surprise, as the EU had tried to shut them down. The article goes on to note that social media has made disinformation and propaganda cheap. An enormous difference from past election-season disinformation is the war in Ukraine: Russia's stakes are greater. An additional concern is that the burden of "monitoring Russian mischief" will be even greater for the US presidential election in November.

One new protection the EU has engaged in, per the Post on 26 May, and one that "officials and researchers from Arizona to Taiwan" are also adopting, is called "prebunking." The process is to send out "weakened doses" of misinformation along with explanations of why they are wrong, creating "mental antibodies" so that readers can distinguish hoaxes from real election issues. The EU has already reached millions of voters with cartoon ads on YouTube, Facebook, and Instagram so that lies and rumors can be identified. US federal agencies are encouraging state and local officials to use these approaches (Maricopa County, Arizona, has started), with CISA running exercises with local officials to implement some form of prebunking to prepare voters to find the truth. The lack of a nationwide rollout is influenced, per the Post, by court rulings that have "chilled collaboration" on combating misinformation amid a conservative legal campaign alleging that such government efforts amount to censorship in violation of the First Amendment. In the US, it is seen as both complicated and political.

The Military Times' Nikki Wentling sized up the status quo on 13 May in an article entitled "Disinformation creates 'precarious' year for democracy." She calls out two challenges that concern some experts: the use of generative AI (for fake images and videos) and China as a newer adversary ready to spread disinformation. Russian disinformation, a concern since 2016 and addressed here before, is now more centralized than it once was, according to Microsoft officials cited in the article. Ms. Wentling closes by noting Russian efforts to undermine US support for Ukraine.

To see previous articles, please visit the Cyber Scene Archive.

Submitted by Gregory Rigby on