"Leaked GitHub Token Exposed Mercedes Source Code"

According to security researchers at RedHunt Labs, a GitHub token leaked by a Mercedes-Benz employee provided access to all the source code stored on the carmaker’s GitHub Enterprise server.  The token, discovered during an internet scan, had been published in the employee’s public GitHub repository, providing unrestricted and unmonitored access to the source code.  The researchers stated that the leak occurred on September 29, 2023, but was not discovered until January 11, 2024.  Mercedes revoked the token on January 24, two days after being alerted to the incident.
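Leaked tokens of this kind are discoverable in large-scale scans because they follow a predictable format: classic GitHub personal access tokens begin with a "ghp_" prefix followed by 36 alphanumeric characters. A minimal sketch of that kind of pattern matching (a toy illustration, not the researchers' actual tooling):

```python
import re

# Classic GitHub personal access tokens use a "ghp_" prefix
# followed by 36 alphanumeric characters.
GITHUB_TOKEN_RE = re.compile(r"\bghp_[A-Za-z0-9]{36}\b")

def find_candidate_tokens(text: str) -> list[str]:
    """Return substrings matching the classic GitHub token format."""
    return GITHUB_TOKEN_RE.findall(text)

# Hypothetical leaked value embedded in a config file.
sample = 'GITHUB_TOKEN = "ghp_' + "a" * 36 + '"'
print(find_candidate_tokens(sample))
```

Real secret scanners combine many such provider-specific patterns and then verify candidate matches against the provider's API before reporting them.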

Submitted by Adam Ekwall on

"US Sanctions Two ISIS-Affiliated Cybersecurity Experts"

The US Treasury Department recently announced sanctions against two "cybersecurity experts" accused of running a platform affiliated with the Islamic State group.  The sanctioned individuals are both Egyptian nationals.  One of them is Mu'min Al-Mawji Mahmud Salim, the creator of a platform named Electronic Horizons Foundation (EHF), which provides cybersecurity training and guidance to ISIS supporters.  The platform offers information on conducting cyber operations, including how to evade law enforcement and work with cryptocurrencies.

Submitted by Adam Ekwall on

"Researchers Win Award for Study on Text Embedding Privacy Risks"

Four researchers from Cornell Tech won the Outstanding Paper Award at the 2023 Empirical Methods in Natural Language Processing (EMNLP) Conference for their paper titled "Text Embeddings Reveal (Almost) As Much As Text." Their paper delves into privacy concerns regarding text embeddings, a Natural Language Processing (NLP) technique that addresses the nuance and ambiguity of words and phrases by representing text as numeric vectors. Machines can process numbers quickly and efficiently, but human language is more complicated.
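The core idea of an embedding is to map text to a vector of numbers so that machines can compare meaning; the paper's privacy concern is that such vectors can be inverted back toward the original text. As a toy illustration of the text-to-numbers step only (a bag-of-words count vector, not the dense neural embeddings the paper studies):

```python
from collections import Counter

def bag_of_words_embedding(text: str, vocabulary: list[str]) -> list[int]:
    """Map a sentence to a vector of word counts over a fixed vocabulary.

    Real embeddings are dense vectors produced by neural models; this
    toy version only illustrates turning text into numbers.
    """
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["privacy", "text", "embeddings", "reveal"]
vec = bag_of_words_embedding("Text embeddings reveal almost as much as text", vocab)
print(vec)  # one count per vocabulary word
```

Even this crude vector leaks which vocabulary words appeared in the input, which hints at why the far richer neural embeddings can reveal almost as much as the text itself.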

Submitted by grigby1 CPVI on

"Mapping Attacks on Generative AI to Business Impact"

The IBM Institute for Business Value discovered that 84 percent of CEOs are concerned about widespread or destructive cyberattacks that generative Artificial Intelligence (AI) adoption could cause. As organizations consider how to incorporate generative AI into their business models and assess the security risks the technology may introduce, it is essential to look at the top attacks that threat actors could use against AI models.

Submitted by grigby1 CPVI on

"Your Printer May Spill All of Your Secrets"

Associate Research Professor Charles Harry at the University of Maryland shares his insights on the creativity of today's cyberattacks, as well as five unexpected places where people could be vulnerable. Cyberattacks have grown in sophistication and complexity, with malicious hackers becoming more skilled at developing malware and gaining access to networks. Harry emphasizes that anyone who visits a commercial, government, or institutional website is a potential entry point.

Submitted by grigby1 CPVI on

"Russian APT28 Phishing Ukraine's Military to Steal Login Info"

Ukraine's National Cyber Security Coordination Center (NCSCC) has warned its military members about a new phishing campaign launched by the Russian-backed cybercriminal group APT28. According to the NCSCC, APT28 is targeting military personnel and units of the Ukrainian Defense Forces through phishing emails in an attempt to gain access to military email accounts. APT28, also known as Fancy Bear or Sofacy, has been active since at least 2004 and has been linked to Russia's General Staff Main Intelligence Directorate (GRU) 85th Main Special Service Center (GTsSS) military unit 26165.

Submitted by grigby1 CPVI on

"FBI and DOJ Disrupt Chinese Hacking Operation"

In response to the Chinese state-sponsored hacking group Volt Typhoon targeting critical infrastructure in the US, the Department of Justice (DOJ) and the Federal Bureau of Investigation (FBI) dismantled the group's infrastructure. It has been reported that the DOJ and the FBI sought and received a court order to disable the Volt Typhoon hacking campaign remotely.

Submitted by grigby1 CPVI on

"China-Linked Hackers Target Myanmar's Top Ministries with Backdoor Blitz"

According to CSIRT-CTI, Mustang Panda, a China-based threat actor, is suspected of targeting Myanmar's Ministry of Defence and Foreign Affairs as part of campaigns aimed at deploying backdoors and Remote Access Trojans (RATs). CSIRT-CTI noted that the activities occurred in November 2023 and January 2024, based on artifacts associated with the attacks uploaded to the VirusTotal platform.

Submitted by grigby1 CPVI on

"Researchers Map AI Threat Landscape, Risks"

According to a new report from the Berryville Institute of Machine Learning (BIML) titled "An Architectural Risk Analysis of Large Language Models," many of the security issues associated with Large Language Models (LLMs) stem from the fact that they all have a black box at their core. LLMs' end users typically have little information about how providers collected and cleaned the data used to train their models, and model developers generally conduct only a surface-level evaluation of the data due to the volume of information available.

Submitted by grigby1 CPVI on

"Italian Regulator Again Finds Privacy Problems in OpenAI"

The ChatGPT maker OpenAI has about a month to respond to the Italian data regulator following the agency's investigation that revealed the company's alleged violation of European privacy laws. In 2023, the Garante, Italy's data protection authority, temporarily banned OpenAI's Large Language Model (LLM) chatbot, citing a violation of the European General Data Protection Regulation (GDPR). The regulator restored in-country access to the chatbot in April after OpenAI agreed to implement age verification and an opt-out form for removing personal data from the LLM.

Submitted by grigby1 CPVI on