"GenAI Requires New, Intelligent Defenses"

"GenAI Requires New, Intelligent Defenses"

Business and public use of generative Artificial Intelligence (AI) calls for further understanding of generative AI risks and the specific defenses to mitigate those risks. Jailbreaking and prompt injection are two emerging threats to generative AI. Jailbreaking uses specific prompts to trick the AI into producing harmful or misleading results. Similar to SQL injection in databases, prompt injection hides malicious data or instructions within typical prompts, causing the model to produce unintended outputs and resulting in vulnerabilities or reputational risks.

Submitted by Gregory Rigby on

"Sumo Logic Completes Investigation Into Recent Security Breach"

"Sumo Logic Completes Investigation Into Recent Security Breach"

Cloud monitoring, log management, and SIEM tools provider Sumo Logic has recently completed its investigation into a recent security incident and is saying that it has found no evidence of impact to customer data.  The company stated that third-party forensic experts verified these findings, and the investigation of this incident is now complete and closed.  The company has also shared indicators of compromise (IoCs) and instructions on how customers can check their own environments.

Submitted by Adam Ekwall on

"US Cybersecurity Lab Suffers Major Data Breach"

"US Cybersecurity Lab Suffers Major Data Breach"

A leading US laboratory famed for cybersecurity, nuclear, and clean energy research has recently suffered a major breach of employee data.  Dating back to the 1940s, Idaho National Laboratory (INL) is responsible for generating the first usable electricity from nuclear power and developing the first nuclear propulsion systems for nuclear submarines and aircraft carriers.  Idaho National Laboratory determined that it was the target of a cybersecurity data breach, affecting the servers supporting its Oracle HCM system, which supports its human resources application.

Submitted by Adam Ekwall on

"Largest Study of its Kind Shows Outdated Password Practices are Widespread"

"Largest Study of its Kind Shows Outdated Password Practices are Widespread"

According to a new Georgia Tech cybersecurity study on the current state of password policies across the Internet, three out of four of the world's most popular websites fail to meet minimum requirement standards, allowing tens of millions of users to create weak passwords. Researchers discovered that 12 percent of websites completely lacked password length requirements. They made this discovery using a first-of-its-kind automated tool that can assess a website's password creation policies. Assistant Professor Frank Li and Ph.D.

Submitted by Gregory Rigby on

"Cybersecurity Insurance and Data Analysis Working Group Re-Envisioned to Help Drive Down Cyber Risk"

"Cybersecurity Insurance and Data Analysis Working Group Re-Envisioned to Help Drive Down Cyber Risk"

Nitin Natarajan, Deputy Director for the Cybersecurity and Infrastructure Security Agency (CISA), recently joined the Treasury Federal Insurance Office and the New York University Stern School of Business’ Volatility and Risk Institute at their conference on Catastrophic Cyber Risk and a Potential Federal Insurance Response, where he announced that CISA will relaunch the Cybersecurity Insurance and Data Analysis Working Group (CIDAWG).

Submitted by Gregory Rigby on

"What Is LockBit, the Cybercrime Gang Hacking Some of the World’s Largest Organizations?"

"What Is LockBit, the Cybercrime Gang Hacking Some of the World’s Largest Organizations?"

LockBit has become more visible, with several high-profile victims appearing on the group's website. The group hit the spotlight in 2019. Its high-profile victims include the UK's Royal Mail and the Ministry of Defence, as well as the Japanese cycling component manufacturer Shimano. LockBit has been linked to nearly 2,000 victims in the US alone since it first appeared on the cybercrime scene. This article continues to discuss insights from researchers at Edith Cowan University on the LockBit gang.

Submitted by Gregory Rigby on

"More Than 330,000 Medicare Recipients Affected by MOVEit Breach"

"More Than 330,000 Medicare Recipients Affected by MOVEit Breach"

A federal government agency revealed that over 330,000 Medicare recipients were affected by a sensitive data leak in the latest disclosures about a Russian ransomware gang's exploitation of the popular MOVEit file transfer service. The US Center for Medicare & Medicaid Services (CMS) provides health coverage to over 160 million people through Medicare, Medicaid, the Children's Health Insurance Program, and the Health Insurance Marketplace.

Submitted by Gregory Rigby on

"AI Art Generators Can Be Fooled Into Making NSFW Images"

"AI Art Generators Can Be Fooled Into Making NSFW Images"

SneakyPrompt is a new algorithm developed by a team of researchers that generates commands to circumvent the safety filters of text-to-image generative Artificial Intelligence (AI) models such as DALL-E 2 and Midjourney. The goal of this study is to find ways to improve those safeguards in the future. The algorithm's creators, which include researchers from Johns Hopkins University in Baltimore and Duke University in Durham, North Carolina, will present their findings at the IEEE Symposium on Security and Privacy in San Francisco in May 2024.

Submitted by Gregory Rigby on

"250 Organizations Take Part in Electrical Grid Security Exercise"

"250 Organizations Take Part in Electrical Grid Security Exercise"

More than 250 organizations recently participated in GridEx VII, the seventh edition of the biennial exercise focusing on the security of the electrical grid in the United States and Canada.  GridEx is organized by the Electricity Information Sharing and Analysis Center (E-ISAC) at the North American Electric Reliability Corporation (NERC) and is the largest grid security exercise in North America.  GridEx VII's main focus was testing crisis response and recovery plans for cyber and physical threats targeting the electrical grid.

 

Submitted by Adam Ekwall on

"Technique Enables AI on Edge Devices to Keep Learning over Time"

"Technique Enables AI on Edge Devices to Keep Learning over Time"

A team of researchers from MIT, the MIT-IBM Watson AI Lab, and other organizations developed a method for deep learning models to efficiently adapt to new sensor data directly on an edge device, thus increasing security and more. Personalized deep learning models can power Artificial Intelligence (AI) chatbots that adapt to understand a user's accent, as well as smart keyboards that regularly update to better predict the next word based on a user's typing history. This customization requires constant Machine Learning (ML) model fine-tuning with new data.

Submitted by Gregory Rigby on
Subscribe to