QR Code and PDF Scams, ChatGPT Warnings, NUIT Voice Assistant Attacks
In this episode of GRXcerpts, we’ll cover technologies you thought were relatively safe but might not be after all. Specifically:

- QR code and PDF scams
- NUIT attacks using voice assistants
- ChatGPT warnings

Plus some hopeful news from Microsoft.

QR Code and PDF Scams

QR codes made an epic comeback during the pandemic, but beware: threat actors are now using them, too. In a campaign dubbed “scan scams,” attackers trick users into scanning QR codes shown on their PCs with their mobile devices, taking advantage of the weaker phishing protection and detection on those devices. The QR codes direct victims to malicious websites that ask for credit and debit card details. Campaigns detected in Q4 of last year, for example, were disguised as parcel delivery companies seeking payment.

On a similar note, threat actors are also using PDFs with embedded images that link to encrypted malicious ZIP files, bypassing gateway scanners. The PDF’s instructions contain a password and trick the user into entering it to unpack the ZIP file, which then deploys QakBot or IcedID malware to gain unauthorized access to systems; those systems are used as beachheads to deploy ransomware. The HP Threat Research team reports a 38% rise in malicious PDF attachments.

“Users should look out for emails and websites that ask to scan QR codes and give up sensitive data, and PDF files linking to password-protected archives,” said Alex Holland, a Senior Malware Analyst from HP’s Security Team.

NUIT Voice Assistant Attacks

New research shows hackers can control smart devices by exploiting voice assistants, including Siri, Google Assistant, Alexa, and Windows Cortana. Using a technique called Near-Ultrasound Inaudible Trojan, or NUIT, attackers embed commands in sounds undetectable by humans, instructing voice assistants to do anything from playing music to unlocking doors. According to researchers at the University of Texas at San Antonio and the University of Colorado Colorado Springs, an attack can occur while a victim is browsing a website that plays NUIT attack commands in the background. If the victim has a mobile phone with voice control enabled nearby, the inaudible commands may activate the assistant and then use it as a conduit for malicious activity. To offset the danger, the researchers advise keeping the volume of receiving speakers low or using earbuds and headsets.
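To make the mechanics concrete, here is a minimal defensive sketch, not taken from the researchers’ tooling, of how one might flag near-ultrasound content in a captured audio clip: compute the signal’s spectrum and measure how much of its energy falls in the roughly 16-20 kHz band that most adults cannot hear but that consumer speakers and microphones still reproduce. The file name, band edges, and 10% threshold below are illustrative assumptions.

```python
# Illustrative sketch only: flag audio whose energy is concentrated in the
# near-ultrasound band that NUIT-style commands rely on. The 16-20 kHz band
# edges and the 10% threshold are assumptions, not values from the NUIT paper.
import wave

import numpy as np


def near_ultrasound_fraction(path: str, band=(16_000.0, 20_000.0)) -> float:
    """Return the fraction of total spectral energy inside `band` (Hz)."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        channels = wf.getnchannels()
        raw = wf.readframes(wf.getnframes())

    # Assumes 16-bit PCM; mix multi-channel audio down to mono.
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)
    if channels > 1:
        samples = samples[: len(samples) - len(samples) % channels]
        samples = samples.reshape(-1, channels).mean(axis=1)

    spectrum = np.abs(np.fft.rfft(samples)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / rate)   # bin frequencies

    total = spectrum.sum()
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return float(in_band / total) if total > 0 else 0.0


if __name__ == "__main__":
    fraction = near_ultrasound_fraction("page_audio.wav")  # hypothetical capture
    if fraction > 0.10:
        print(f"Warning: {fraction:.1%} of signal energy is near-ultrasound")
    else:
        print(f"Near-ultrasound energy fraction: {fraction:.1%}")
```

A production detector would run on short sliding windows in real time rather than a whole file, but the core observation is the same: these commands occupy a band that human ears miss and device microphones do not.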
ChatGPT Warnings

ChatGPT has become a popular AI tool for a variety of tasks, from creating content to research to writing code. But new data shows that more than 4% of employees have put sensitive data into the large language model, raising concerns that careless usage may result in massive leaks of proprietary information. And as more employees adopt ChatGPT and other AI-based productivity tools, the risks will only grow. Some companies are already taking action: JPMorgan has restricted employee use of ChatGPT, and Amazon, Microsoft, and Walmart have warned employees to use care with generative AI services. Karla Grossenbacher, a partner at law firm Seyfarth Shaw, expects that “prudent employers will include prohibitions in employee confidentiality agreements and corporate policies regarding entering confidential, proprietary, or trade secret information into AI chatbots or language models, such as ChatGPT.”

ChatGPT is making headlines for other reasons, too. ChatGPT creator OpenAI has confirmed a bug that may have shown the titles of your chats, and the first message of a newly created conversation, to other users. The issue was related to ChatGPT’s use of redis-py, an open-source Redis client library; OpenAI disclosed the bug in late March. The bug also exposed payment-related information of a small number of ChatGPT Plus subscribers, including name, email address, payment address, payment card expiration date, and the last four digits of the customer’s credit card number.

And lastly, threat intelligence company GreyNoise warned about a new ChatGPT feature that expands the chatbot’s information-collecting capabilities through plugins. While OpenAI designed the new feature to keep data secure and private, GreyNoise expressed concern over the code examples OpenAI provided to customers. Specifically, the examples included a Docker image for the MinIO distributed object storage system, a version affected by a documented information disclosure vulnerability (CVE-2023-28432). GreyNoise noted that there is no information suggesting any specific bad actor is targeting ChatGPT, but it recommends that ChatGPT users taking advantage of the plugins upgrade to a patched version of MinIO to avoid any potential breaches.

All information is current as of March 28, 2023. Subscribe to receive future episodes as they are released.

#chatgpt #malware #voiceassistant #cyberrisk #cybersecurity #cyberthreats #cyberthreatintelligence #ransomware #qrcode