How to store Location History in Android in 2024? | Kaspersky official blog https://www.kaspersky.com/blog/google-location-history-security-2024/50725/ Fri, 01 Mar 2024 11:45:46 +0000 https://www.kaspersky.com/blog/?p=50725 Of all the accusations routinely hurled Google’s way, there’s one that especially alarms users: the company can track the location of all Android — and to some extent, Apple — phones. Past experience suggests that Google indeed does this — not only using this data to display ads, but also storing it in Location History and even providing it to law enforcement agencies. Now Google promises to only store Location History on the device. Should we believe it?

What’s wrong with Location History?

Location History lets you easily view the places a user visited and when they did so. You can use it for all kinds of things: remembering the name of that beach or restaurant you went to while on vacation two years ago, finding the address of a place your better half often goes to after work, getting new bar suggestions based on the ones you’ve been to, locating the florist that delivered the surprise bouquet for a party, and many more. The ways this feature can both benefit and harm Google account holders are widely reported. Little wonder then that many — even those with a clean conscience — often want to turn it off completely.

Regrettably, Google has often been caught abusing its Location History setting. Even if explicitly disabled, Location History was still collected under “Web & App Activity”. This led to a series of lawsuits, which Google lost. In 2023, the company was ordered to pay $93 million under one suit, and a year earlier $392 million under another. These fines were but a pinprick to a corporation with hundreds of billions of dollars in revenue, but at least the court had Google revise its location tracking practices.

The combined legal and public pressure apparently led to the company announcing at the end of 2023 a drastic change: now, according to Google, Location History will be collected and stored on users’ devices only. But does that make the feature any more secure?

How does Location History (supposedly) work in 2024?

First of all, check that the feature has been updated on your device. As is Google’s wont, updates for the billions of Android devices roll out in waves, and to relatively recent OS versions only. So, unless you see an alert that looks like the one below, it’s likely your device hasn’t received the update, and enabling Location History will save the data on Google’s servers.

Unless Google has explicitly warned you that your Location History will be stored on your device, it’s likely to continue being saved to Google’s servers

If your Location History is now stored locally, however, Google Maps will offer options for centralized management of your “places”. By selecting a point on the map, such as a coffee shop, and opening its description, you’ll see all the times you visited the place in the past, all searches for the place on the map, and other things like that. One tap on the location card can delete all of your activity associated with the place.

Google says it will store the history for each place for three months by default and then delete it. To change this setting or disable history, simply tap the blue dot on the map that shows your current location and turn off Location History in the window that pops up.

Options for configuring and disabling Location History

An obvious downside to offline Location History is that it won’t be accessible to the user on their other devices. As a workaround, Google suggests storing an encrypted backup on its servers.

Keep in mind that what we’re discussing here is the new implementation of Location History as described by Google. Detailed analysis of how this new scheme actually works may reveal pitfalls and caveats that no one except Google’s developers knows about at this point.

What threats does this update eliminate?

Although the new storage method improves the privacy of location data, it can’t be considered a one-size-fits-all solution to all existing issues. So how does it affect various hypothetical threat scenarios?

  • Tracking you to customize ads. This is unlikely to be affected in any way: Google can continue to collect data on places you visit in an anonymized, generalized form. You’ll keep seeing ads linked to your current or past locations unless you disable either that or all targeted ads entirely. Remember that Google isn’t the only one out there tracking your location. Other apps and services have been found guilty of abusing this data as well; here are a few examples: one, two, and three.
  • Evil hackers and cyberspies. These malicious groups typically use commercial spyware (stalkerware) or malicious implants, so the changes to Google’s Location History will hardly affect them.
  • Jealous partner or prying relative. It’ll be harder to use a computer on which you’re signed in to your Google account to track your location. Someone could still quietly snoop on your phone while it’s unlocked, as well as secretly install commercial spyware such as stalkerware, which we mentioned above. What’s crucial here, therefore, is the general advice on protecting smartphones from mobile spyware, not the updates to Google Maps.
  • Law enforcement. This isn’t likely to change much, as, in addition to asking Google, the police can request your location data from the mobile carrier or deduce it from surveillance camera footage, which is both easier and faster.

So, the update doesn’t help user privacy all that much, does it? We’re afraid not.

How do I effectively protect my location data?

You’re limited to fairly drastic options these days if you want to prevent location tracking. We list these here in ascending order of extremity.

  • Use comprehensive security on all your devices, including phones and tablets. This will reduce the likelihood of being exposed to malware, including stalkerware.
  • Disable Google Location History and Web & App Activity, avoid giving location permissions to any apps except navigation apps, turn off personalized ads, and use a DNS service that filters ads.
  • Turn off all geo-tracking features (GPS, Google location services, and others) on your smartphone.
  • When on an especially important trip, activate flight mode for an hour or two, or just turn off your smartphone.
  • Ditch smartphones in favor of the most basic dumbphones.
  • Ultimately, stop carrying around any kind of phone at all.
  • Live 100% off-grid; e.g., in a cave.
Transatlantic Cable podcast episode 335 | Kaspersky official blog https://www.kaspersky.com/blog/transatlantic-cable-podcast-335/50707/ Wed, 28 Feb 2024 17:11:33 +0000 https://www.kaspersky.com/blog/?p=50707 Episode 335 of the Transatlantic Cable Podcast kicks off with news that Apple are already preparing for a post-quantum world with their latest iMessage update. From there the team discuss criticism around Google’s ‘woke’ AI picture issues.

Following that, the team wrap up with two stories: the first around Air Canada’s chatbot giving incorrect refund advice to a customer, and the second about a spoon-bending magician who says he was paid to create a fake Biden robocall.

If you like what you heard, please consider subscribing.

VoltSchemer: attacks on wireless chargers through the power supply | Kaspersky official blog https://www.kaspersky.com/blog/voltschemer-attack-wireless-chargers/50710/ Wed, 28 Feb 2024 12:15:56 +0000 https://www.kaspersky.com/blog/?p=50710 A group of researchers from the University of Florida has published a study on a type of attack using Qi wireless chargers, which they’ve dubbed VoltSchemer. In the study, they describe in detail how these attacks work, what makes them possible, and what results they’ve achieved.

In this post, first we’ll discuss the researchers’ main findings. Then we’ll explore what it all means practically speaking — and whether you should be concerned about someone roasting your smartphone through a wireless charger.

The main idea behind the VoltSchemer attacks

The Qi standard has become the dominant one in its field: it’s supported by all the latest wireless chargers and smartphones capable of wireless charging. VoltSchemer attacks exploit two fundamental features of the Qi standard.

The first is the way the smartphone and wireless charger exchange information to coordinate the battery charging process: the Qi standard has a communication protocol that uses the only “thing” connecting the charger and the smartphone — a magnetic field — to transmit messages.

The second feature is that wireless chargers are intended for anyone to use freely. That is, any smartphone can be placed on any wireless charger without any kind of prior pairing, and the battery will start charging immediately. Thus, the Qi communication protocol involves no encryption — all commands are transmitted in plain text.

It is this lack of encryption that makes communication between charger and smartphone susceptible to man-in-the-middle attacks; that is, said communication can be intercepted and tampered with. That, coupled with the first feature (use of the magnetic field), means such tampering is not even that hard to accomplish: to send malicious commands, attackers only need to be able to manipulate the magnetic field to mimic Qi-standard signals.

To illustrate the attack, the researchers created a malicious power adapter: an overlay on a regular wall USB socket. Source

And that’s exactly what the researchers did: they built a “malicious” power adapter disguised as a wall USB socket, which allowed them to create precisely tuned voltage noise. They were able to send their own commands to the wireless charger, as well as block Qi messages sent by the smartphone.

Thus, VoltSchemer attacks require no modifications to the wireless charger’s hardware or firmware. All that’s necessary is to place a malicious power source in a location suitable for luring unsuspecting victims.

Next, the researchers explored all the ways potential attackers could exploit this method. That is, they considered various possible attack vectors and tested their feasibility in practice.

VoltSchemer attacks don’t require any modifications to the wireless charger itself — a malicious power source is enough. Source

1. Silent commands to the Siri and Google Assistant voice assistants

The first thing the researchers tested was the possibility of sending silent voice commands to the built-in voice assistant of the charging smartphone through the wireless charger. They copied this attack vector from their colleagues at Hong Kong Polytechnic University, who dubbed this attack Heartworm.

The general idea of the Heartworm attack is to send silent commands to the smartphone’s voice assistant using a magnetic field. Source

The idea here is that the smartphone’s microphone converts sound into electrical vibrations. It’s therefore possible to generate these electrical vibrations in the microphone directly using electricity itself rather than actual sound. To prevent this from happening, microphone manufacturers use electromagnetic shielding — Faraday cages. However, there’s a key nuance here: although these shields are good at suppressing the electrical component, they can be penetrated by magnetic fields.

Smartphones that can charge wirelessly are typically equipped with a ferrite screen, which protects against magnetic fields. However, this screen is located right next to the induction coil, and so doesn’t cover the microphone. Thus, today’s smartphone microphones are quite vulnerable to attacks from devices capable of manipulating magnetic fields — such as wireless chargers.

Microphones in today’s smartphones aren’t protected from magnetic field manipulation. Source

The creators of VoltSchemer expanded the already known Heartworm attack with the ability to affect the microphone of a charging smartphone using a “malicious” power source. The authors of the original attack used a specially modified wireless charger for this purpose.

2. Overheating a charging smartphone

Next, the researchers tested whether it’s possible to use the VoltSchemer attack to overheat a smartphone charging on the compromised charger. Normally, when the battery reaches the required charge level or the temperature rises to a threshold value, the smartphone sends a command to stop the charging process.

However, the researchers were able to use VoltSchemer to block these commands. Without receiving the command to stop, the compromised charger continues to supply energy to the smartphone, gradually heating it up — and the smartphone can’t do anything about it. For cases such as this, smartphones have emergency defense mechanisms to avoid overheating: first, the device closes applications, and if that doesn’t help it shuts down completely.

Using the VoltSchemer attack, researchers were able to heat a smartphone on a wireless charger to a temperature of 178°F — approximately 81°C. Source

Thus, the researchers were able to heat a smartphone up to a temperature of 81°C (178°F), which is quite dangerous for the battery — and in certain circumstances could lead to its catching fire (which could of course lead to other things catching fire if the charging phone is left unattended).

3. “Frying” other stuff

Next, the researchers explored the possibility of “frying” various other devices and everyday items. Of course, under normal circumstances, a wireless charger shouldn’t activate unless it receives a command from the smartphone placed on it. However, with the VoltSchemer attack, such a command can be given at any time, as well as a command to not stop charging.

Now, take a guess what will happen to any items lying on the charger at that moment! Nothing good, that’s for sure. For example, the researchers were able to heat a paperclip to a temperature of 280°C (536°F) — enough to set fire to any attached documents. They also managed to fry to death a car key, a USB flash drive, an SSD drive, and RFID chips embedded in bank cards, office passes, travel cards, biometric passports and other such documents.

Also using the VoltSchemer attack, researchers were able to disable car keys, a USB flash drive, an SSD drive, and several cards with RFID chips, as well as heat a paperclip to a temperature of 536°F — 280°C. Source

In total, the researchers examined nine different models of wireless chargers available in stores, and all of them were vulnerable to VoltSchemer attacks. As you might guess, the models with the highest power pose the greatest danger, as they have the most potential to cause serious damage and overheat smartphones.

Should you fear a VoltSchemer attack in real life?

Protecting against VoltSchemer attacks is fairly straightforward: simply avoid using public wireless chargers and don’t connect your own wireless charger to any suspicious USB ports or power adapters.

While VoltSchemer attacks are quite interesting and can have spectacular results, their real-world practicality is highly questionable. Firstly, such an attack is very difficult to organize. Secondly, it’s not exactly clear what the benefits to an attacker would be — unless they’re a pyromaniac, of course.

But what this research clearly demonstrates is how inherently dangerous wireless chargers can be — especially the more powerful models. So, if you’re not completely sure of the reliability and safety of a particular wireless charger, you’d be wise to avoid using it. While wireless charger hacking is unlikely, the danger of your smartphone randomly getting roasted due to a “rogue” charger that no longer responds to charging commands isn’t entirely absent.

Toy robot security issues | Kaspersky official blog https://www.kaspersky.com/blog/robot-toy-security-issue/50630/ Tue, 27 Feb 2024 15:00:33 +0000 https://www.kaspersky.com/blog/?p=50630 Kaspersky experts recently studied the security of a popular toy robot model, finding major issues that allowed malicious actors to make a video call to any such robot, hijack the parental account, or, potentially, even upload modified firmware. Read on for the details.

What a toy robot can do

The toy robot model that we studied is a kind of hybrid between a smartphone/tablet and a smart-speaker on wheels that enables it to move about. The robot has no limbs, so rolling around the house is its only option to physically interact with its environment.

The robot’s centerpiece is a large touchscreen that can display a control UI, interactive learning apps for kids, and a lively, detailed animated cartoon-like face. Its facial expressions change with context: to their credit the developers did a great job on the robot’s personality.

You can control the robot with voice commands, but some of its features don’t support these, so sometimes you have to catch the robot and poke its face (the built-in screen).

In addition to a built-in microphone and a rather loud speaker, the robot has a wide-angle camera placed just above the screen. A key feature touted by the vendor is parents’ ability to video-call their kids right through the robot.

On the front face, about halfway between the screen and the wheels, is an extra optical object-recognition sensor that helps the robot avoid collisions. Since obstacle recognition is totally independent of the main camera, the developers were able to add a very useful physical shutter that completely covers the latter.

So, if you’re concerned that someone might be peeping at you and/or your child through that camera — sadly not without reason as we’ll learn later — you can simply close the shutter. And in case you’re worried that someone might be eavesdropping on you through the built-in microphone, you can just turn off the robot (and judging by the time it takes to boot back up, this is an honest-to-goodness shutdown — not a sleep mode).

As you’d expect, an app for controlling and monitoring the toy is available for parents to use. And, as you must have guessed by now, it’s all connected to the internet and employs a bunch of cloud services under the hood. If you’re interested in the technical details, you can find these in the full version of the security research, which we’ve published on Securelist.

As usual, the more complex the system, the more likely it is to have security holes, which someone might try to exploit to do something unsavory. And here we’ve reached the key point of this post: after studying the robot closely, we found several serious vulnerabilities.

Unauthorized video calling

The first thing we found during our research was that malicious actors could make video calls to any robot of this kind. The vendor’s server issued video session tokens to anyone who had both the robot ID and the parent ID. The robot’s ID wasn’t hard to brute-force: every toy had a nine-character ID similar to the serial number printed on its body, with the first two characters being the same for every unit. And the parent’s ID could be obtained by sending a request with the robot ID to the manufacturer’s server without any authentication.

Thus, a malicious actor who wanted to call a random child could either try to guess a specific robot’s ID, or play a chat-roulette game by calling random IDs.

Complete parental account hijack

It doesn’t end there. The gullible system let anyone with a robot ID retrieve lots of personal information from the server: IP address, country of residence, kid’s name, gender, age — along with details of the parental account: parent’s email address, phone number, and the code that links the parental app to the robot.

This, in turn, opened the door for a far more hazardous attack: complete parental-account hijack. A malicious actor would only have needed to have taken a few simple steps:

  • The first one would have been to log in to the parental account from their own device by using the email address or phone number obtained previously. Authorization required submitting a six-digit one-time code, but login attempts were unlimited so trivial brute-forcing would have done the trick.
  • It would only have taken one click to unlink the robot from the true parental account.
  • Next would have been linking it to the attacker’s account. Account verification relied on the linking-code mentioned above, and the server would send it to all comers.

A successful attack would have resulted in the parents losing all access to the robot, and recovering it would have required contacting tech support. Even then, the attacker could still have repeated the whole process again, because all they needed was the robot ID, which remained unchanged.

Uploading modified firmware

Finally, as we studied the way that the robot’s various systems functioned, we discovered security issues with the software update process. Update packages came without a digital signature, and the robot installed a specially formatted update archive received from the vendor’s server without running any verifications first.

This opened possibilities for attacking the update server, replacing the archive with a modified one, and uploading malicious firmware that let the attacker execute arbitrary commands with superuser permissions on all robots. In theory, the attackers would then have been able to assume control over the robot’s movements, use the built-in cameras and microphones for spying, make calls to robots, and so on.
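
For a sense of what the missing check would look like, here's a minimal sketch that verifies a detached Ed25519 signature on an update archive before anything gets installed. The file names, key value, and signature scheme are illustrative assumptions, not the vendor's actual update format:

```python
# Hypothetical check of a detached Ed25519 signature on an update archive.
# Illustrative only - not the toy vendor's real update format or keys.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

VENDOR_PUBKEY = bytes.fromhex(
    "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c"  # example key
)

def update_is_authentic(archive_path: str, sig_path: str) -> bool:
    """Return True only if the archive is signed by the vendor's private key."""
    public_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY)
    with open(archive_path, "rb") as f:
        archive = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()  # 64-byte detached signature
    try:
        public_key.verify(signature, archive)  # raises InvalidSignature if tampered with
        return True
    except InvalidSignature:
        return False

# Only unpack and install the firmware if the check passes:
# if update_is_authentic("update.tar.gz", "update.tar.gz.sig"):
#     install_update("update.tar.gz")
```

The essential point is that the device would then trust only archives signed with the vendor's private key, so an archive swapped on a compromised update server fails the check.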

How to stay safe

This tale has a happy ending, though. We informed the toy’s developers about the issues we’d discovered, and they took steps to fix them. The vulnerabilities described above have all been fixed.

In closing, here are a few tips on staying safe while using various smart gadgets:

  • Remember that all kinds of smart devices — even toys — are typically highly complex digital systems whose developers often fail to ensure secure and reliable storage of user data.
  • As you shop for a device, be sure to closely read user feedback and reviews and, ideally, any security reports if you can find them.
  • Keep in mind that the mere discovery of vulnerabilities in a device doesn’t make it inferior: issues can be found anywhere. What you need to look for is the vendor’s response: it’s a good sign if any issues have been fixed. It’s not a good thing if the vendor appears not to care.
  • To avoid being spied or eavesdropped on by your smart devices, turn them off when you’re not using them, and shutter or tape over the camera.
  • Finally, it goes without saying that you should protect all your family members’ devices with a reliable security solution. A toy-robot hack is admittedly an exotic threat — but the likelihood of encountering other types of online threats is still very high these days.
Apple has released a new way to protect instant messaging in iMessage | Kaspersky official blog https://www.kaspersky.com/blog/apple-pq3-quantum-secure-messaging/50692/ Fri, 23 Feb 2024 07:42:09 +0000 https://www.kaspersky.com/blog/?p=50692 The widespread use of quantum computers in the near future may allow hackers to decrypt messages that were encrypted with classical cryptography methods at astonishing speed. Apple has proposed a solution to this potential problem: after the next update of their OSes, conversations in iMessage will be protected by a new post-quantum cryptographic protocol called PQ3. This technology reworks the public-key algorithms used for end-to-end encryption so that they still run on classical, non-quantum computers, yet protect against potential attacks mounted with future quantum computers.

Today we’ll go over how this new encryption protocol works, and why it’s needed.

How PQ3 works

All popular instant messaging applications and services today implement standard asymmetric encryption methods using a public and private key pair. The public key is used to encrypt sent messages and can be transmitted over insecure channels. The private key is most commonly used to create symmetric session keys that are then used to encrypt messages.

This level of security is sufficient for now, but Apple is playing it safe – fearing that hackers may be preparing for quantum computers ahead of time. Due to the low cost of data storage, attackers can collect huge amounts of encrypted data and store it until it can be decrypted using quantum computers.

To prevent this, Apple has developed a new cryptographic protection protocol called PQ3. The key exchange is now protected with an additional post-quantum component. It also minimizes the number of messages that could potentially be decrypted.

Types of cryptography used in messengers. Source
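
To give a feel for the general approach (and this is not Apple's actual PQ3 implementation), here's a rough sketch of hybrid key agreement: a classical X25519 exchange combined with a secret from a post-quantum KEM, so the derived session key stays safe unless both components are broken. The pq_kem_shared_secret function is a stand-in for a real post-quantum KEM such as ML-KEM (Kyber):

```python
# Rough sketch of hybrid (classical + post-quantum) key agreement.
# Not Apple's PQ3 - just the general idea of combining two secrets.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def pq_kem_shared_secret() -> bytes:
    # Placeholder for a post-quantum KEM (e.g., ML-KEM/Kyber encapsulation).
    # A real implementation would come from a PQC library.
    return os.urandom(32)

# Classical part: ordinary X25519 Diffie-Hellman between two parties
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# Post-quantum part: secret obtained from a KEM (placeholder here)
pq_secret = pq_kem_shared_secret()

# Combine both secrets: an attacker must break BOTH to recover the session key
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid session key",
).derive(classical_secret + pq_secret)
```

In a real protocol both parties would, of course, derive the same pq_secret via KEM encapsulation and decapsulation; the placeholder above only marks where that secret enters the key derivation.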

The PQ3 protocol will be available in iOS 17.4, iPadOS 17.4, macOS 14.4, and watchOS 10.4. The transition to the new protocol will be gradual: firstly, all user conversations on PQ3-enabled devices will be automatically switched to this protocol; then, later in 2024, Apple plans to completely replace the previously used protocol of end-to-end encryption.

Generally, credit is due to Apple for this imminent security boost; however, the company isn’t the first to provide post-quantum cybersecurity of instant messaging services and applications. In the fall of 2023, Signal’s developers added support for a similar protocol – PQXDH, which provides post-quantum instant messaging security for users of updated versions of Signal when creating new secure chats.

How the advent of PQ3 will affect the security of Apple users

In essence, Apple is adding a post-quantum component to iMessage’s overall message encryption scheme. In fact, PQ3 will be only one element of its security approach, working alongside traditional elliptic-curve cryptography such as ECDSA signatures.

However, relying solely on post-quantum protection technologies isn’t advised. Igor Kuznetsov, Director of Kaspersky’s Global Research and Analysis Team (GReAT), commented on Apple’s innovations as follows:

“Since PQ3 still relies on traditional signature algorithms for message authentication, a man-in-the-middle attacker with a powerful quantum computer (yet to be created) may still have a chance of hacking it.

Does it offer protection against adversaries capable of compromising the device or unlocking it? No, PQ3 only protects the transport layer. Once a message is delivered to an iDevice, there’s no difference – it can be read from the screen, extracted by law enforcement after unlocking the phone, or exfiltrated by advanced attackers using Pegasus, TriangleDB or similar software.”

Thus, those concerned about the protection of their data should not rely only on modern post-quantum cryptographic protocols. It’s important to ensure full protection of your device to make sure third parties can’t reach your instant messages.

Transatlantic Cable podcast episode 334 | Kaspersky official blog https://www.kaspersky.com/blog/transatlantic-cable-podcast-334/50674/ Thu, 22 Feb 2024 20:51:14 +0000 https://www.kaspersky.com/blog/?p=50674 In today’s episode of the Transatlantic Cable podcast, the team look at news that companies at the forefront of generative AI are looking to ‘take action’ on deceptive AI in upcoming elections. From there, the team discuss news that the Canadian government is set to take action against devices such as Flipper Zero, in an apparent fight against criminal activity.

To wrap up, the team discuss news that international police agencies have taken down LockBit – the infamous ransomware gang. Additionally, the team discuss a bizarre story around Artificial Intelligence, blue aliens and job applications – yes, really.

If you liked what you heard, please consider subscribing.

Credential phishing targets ESPs through ESPs https://www.kaspersky.com/blog/sendgrid-credentials-phishing/50662/ Thu, 22 Feb 2024 10:00:06 +0000 https://www.kaspersky.com/blog/?p=50662 Mailing lists that companies use to contact customers have always been an interesting target for cyberattacks. They can be used for spamming, phishing, and even more sophisticated scams. If, besides the databases, the attackers can gain access to a legitimate tool for sending bulk emails, this significantly increases the chances of success of any attack. After all, users who have agreed to receive emails and are accustomed to consuming information in this way are more likely to open a familiar newsletter than some unexpected missive. That’s why attackers regularly attempt to seize access to companies’ accounts held with email service providers (ESPs). In the latest phishing campaign we’ve uncovered, the attack method has been refined to target credentials on the website of the ESP SendGrid by sending phishing emails directly through the ESP itself.

Why is phishing through SendGrid more dangerous in this case?

Among the tips we usually give in phishing-related posts, we most often recommend taking a close look at the domain of the site in the button or text hyperlink that you’re invited to click or tap. ESPs, as a rule, don’t allow direct links to client websites to be inserted in an email, but rather serve as a kind of redirect — inside the link the email recipient sees the domain of the ESP, which then redirects them to the site specified by the mail authors when setting up the mailing campaign. Among other things, this is done to collect accurate analytics.

In this case, the phishing email appears to come from the ESP SendGrid, expressing concern about the customer’s security and highlighting the need to enable two-factor authentication (2FA) to prevent outsiders from taking control of their account. The email explains the benefits of 2FA and provides a link to update the security settings. This leads, as you’ve probably already guessed, to some address in the SendGrid domain (where the settings page would likely be located if the email really was from SendGrid).

To all email scanners, the phishing looks like a perfectly legitimate email sent from SendGrid’s servers with valid links pointing to the SendGrid domain. The only thing that might alert the recipient is the sender’s address. That’s because ESPs put the real customer’s domain and mailing ID there. Most often, phishers make use of hijacked accounts (ESPs subject new customers to rigorous checks, while old ones who’ve already fired off some bulk emails are considered reliable).

An email seemingly from SendGrid sent through SendGrid to phish a SendGrid account.

Phishing site

This is where the attackers’ originality comes to an end. SendGrid redirects the link-clicking victim to a regular phishing site mimicking an account login page. The site domain is “sendgreds”, which at first glance looks very similar to “sendgrid”.

A site mimicking the SendGrid login page. Note the domain in the address bar
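
As a rough illustration of the kind of check worth doing before clicking, the sketch below flags link domains that look close to, but are not, the expected one. The brand name, example URLs, and similarity threshold are assumptions chosen for this demonstration, not a production anti-phishing engine:

```python
# Toy lookalike-domain check for links in an email - thresholds and domains
# here are illustrative only.
from difflib import SequenceMatcher
from urllib.parse import urlparse

def looks_like_lookalike(url: str, brand: str = "sendgrid") -> bool:
    host = (urlparse(url).hostname or "").lower()
    labels = host.split(".")
    if brand in labels:
        return False  # e.g. a genuine *.sendgrid.net redirect link
    # Compare the registrable label (the part just before the TLD) to the brand
    label = labels[-2] if len(labels) >= 2 else host
    similarity = SequenceMatcher(None, label, brand).ratio()
    return similarity >= 0.75  # close but not identical -> suspicious

print(looks_like_lookalike("https://u123.ct.sendgrid.net/ls/click?upn=abc"))  # False
print(looks_like_lookalike("https://login.sendgreds.com/account"))            # True
```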

How to stay safe

Since the email is sent through a legitimate service and shows no typical phishing signs, it may slip through the net of automatic filters. Therefore, to protect company users, we always recommend deploying solutions with advanced anti-phishing technology not only at the mail gateway level but on all devices that have access to the internet. This will block any attempted redirects to phishing sites.

And yes, for once it’s worth heeding the attackers’ advice and enabling 2FA. Just don’t do it through a link in a suspicious email: do it in the settings of your account on the ESP’s website.

Update. We contacted Twilio and received the following statement from their spokesperson:

Impersonating a site administrator, or other critical function, has proven an effective means of phishing across the industry, and Twilio SendGrid takes abuse of its platform and services very seriously. Twilio detected that bad actors obtained customer account credentials and used our platform to launch phishing attacks; our fraud, compliance and cyber security teams immediately shut down accounts identified and associated with the phishing campaign. We encourage all end users to take a multi-pronged approach to combat phishing attacks, including two factor authentication, IP access management, and using domain-based messaging.

The biggest ransomware attacks of 2023 | Kaspersky official blog https://www.kaspersky.com/blog/ransowmare-attacks-in-2023/50634/ Tue, 20 Feb 2024 13:13:27 +0000 https://www.kaspersky.com/blog/?p=50634 Time was when any ransomware incident would spark a lively press and public reaction. Fast forward to the present, and the word “ransomware” in a headline doesn’t generate nearly as much interest: such attacks have become commonplace. Nonetheless, they continue to pose a grave threat to corporate security. This review spotlights the biggest and most high-profile incidents that occurred in 2023.

January 2023: LockBit attack on the UK’s Royal Mail

The year kicked off with the LockBit group attacking Royal Mail, the UK’s national postal service. The attack paralyzed international mail delivery, leaving millions of letters and parcels stuck in the company’s system. On top of that, the parcel tracking website, online payment system, and several other services were also crippled; and at the Royal Mail distribution center in Northern Ireland, printers began spewing out copies of the LockBit group’s distinctive orange ransom note.

The LockBit ransom note that printers at the Royal Mail distribution center began printing in earnest. Source

As is commonly the case with modern ransomware attacks, LockBit threatened to post stolen data online unless the ransom was paid. Royal Mail refused to pay up, so the data ended up being published.

February 2023: ESXiArgs attacks VMware ESXi servers worldwide

February saw a massive automated ESXiArgs ransomware attack on organizations through the RCE vulnerability CVE-2021-21974 in VMware ESXi servers. Although VMware released a patch for this vulnerability back in early 2021, the attack left more than 3000 VMware ESXi servers encrypted.

The attack operators demanded just over 2BTC (around $45,000 at the time of the attack). For each individual victim they generated a new Bitcoin wallet and put its address in the ransom note.

Ransom demand from the original version of ESXiArgs ransomware. Source

Just days after the attack began, the cybercriminals unleashed a new strain of the cryptomalware, making it far harder to recover encrypted virtual machines. To make their activities more difficult to trace, they also stopped giving out ransom wallet addresses, prompting victims to make contact through the P2P messenger Tox instead.

March 2023: Clop group widely exploits a zero-day in GoAnywhere MFT

In March 2023, the Clop group began widely exploiting a zero-day vulnerability in Fortra’s GoAnywhere MFT (managed file transfer) tool. Clop is well-known for its penchant for exploiting vulnerabilities in such services: in 2020–2021, the group attacked organizations through a hole in Accellion FTA, switching in late 2021 to exploiting a vulnerability in SolarWinds Serv-U.

In total, more than 100 organizations suffered attacks on vulnerable GoAnywhere MFT servers, including Procter & Gamble, the City of Toronto, and Community Health Systems — one of the largest healthcare providers in the U.S.

Map of GoAnywhere MFT servers connected to the internet. Source

April 2023: NCR Aloha POS terminals disabled by BlackCat attack

In April, the ALPHV group (aka BlackCat — after the ransomware it uses) attacked NCR, a U.S. manufacturer and servicer of ATMs, barcode readers, payment terminals, and other retail and banking equipment.

The ransomware attack shut down the data centers handling the Aloha POS platform — which is used in restaurants, primarily fast food — for several days.

NCR Aloha POS platform disabled by the ALPHV/BlackCat group. Source

Essentially, the platform is a one-stop shop for managing catering operations: from processing payments, taking online orders, and operating a loyalty program, to managing the preparation of dishes in the kitchen and payroll accounting. As a result of the ransomware attack on NCR, many catering establishments were forced to revert to pen and paper.

May 2023: Royal ransomware attack on the City of Dallas

Early May saw a ransomware attack on municipal services in Dallas, Texas — the ninth most populous city in the U.S. Most affected were IT systems and communications of the Dallas Police Department, and printers on the City of Dallas network began churning out ransom notes.

The Royal ransom note printed out through City of Dallas network printers. Source

Later that month, there was another ransomware attack on an urban municipality: the target this time was the City of Augusta in the U.S. state of Georgia, and the perpetrators were the BlackByte group.

June 2023: Clop group launches massive attacks through vulnerability in MOVEit Transfer

In June, the same Clop group responsible for the earlier attacks on Fortra GoAnywhere MFT began exploiting a vulnerability in another managed file transfer tool — Progress Software’s MOVEit Transfer. This vulnerability, CVE-2023-34362, was disclosed and fixed by Progress on the last day of May, but as usual, not all clients managed to apply the patches quickly enough.

This ransomware attack — one of the largest incidents of the year — affected numerous organizations, including the oil company Shell, the New York City Department of Education, the BBC media corporation, the British pharmacy chain Boots, the Irish airline Aer Lingus, the University of Georgia, and the German printing equipment manufacturer Heidelberger Druckmaschinen.

The Clop website instructs affected companies to contact the group for negotiations. Source

July 2023: University of Hawaii pays ransom to the NoEscape group

In July, the University of Hawaii admitted to paying off ransomwarers. The incident itself occurred a month earlier, when all eyes were fixed on the attacks on MOVEit. During that time, a relatively new group going by the name of NoEscape infected one of the university’s campuses, Hawaiian Community College, with ransomware.

Having stolen 65GB of data, the attackers threatened the university with publication. The personal information of 28,000 people was apparently at risk of compromise. It was this fact that convinced the university to pay the ransom to the extortionists.

NoEscape announces the hack of the University of Hawaii on its website. Source

Of note is that university staff had to temporarily shut down IT systems to stop the ransomware from spreading. Although the NoEscape group supplied a decryption key upon payment of the ransom, the restoration of the IT infrastructure was expected to take two months.

August 2023: Rhysida targets the healthcare sector

August was marked by a series of attacks by the Rhysida ransomware group on the healthcare sector. Prospect Medical Holdings (PMH), which operates 16 hospitals and 165 clinics across several American states, was the organization that suffered the most.

The hackers claimed to have stolen 1TB of corporate documents and a 1.3 TB SQL database containing 500,000 social security numbers, passports, driver’s licenses, patient medical records, as well as financial and legal documents. The cybercriminals demanded a 50BTC ransom (then around $1.3 million).

Ransom note from the Rhysida group. Source

September 2023: BlackCat attacks Caesars and MGM casinos

In early September, news broke of a ransomware attack on two of the biggest U.S. hotel and casino chains — Caesars and MGM — in one stroke. Behind the attacks was the ALPHV/BlackCat group, mentioned above in connection with the assault on the NCR Aloha POS platform.

The incident shut down the companies’ entire infrastructure — from hotel check-in systems to slot machines. Interestingly, the victims responded in very different ways. Caesars decided to pay the extortionists $15 million, half of the original $30 million demand.

MGM chose not to pay up, but rather to restore the infrastructure on its own. The recovery process took nine days, during which time the company lost $100 million (its own estimate), of which $10 million was direct costs related to restoring the downed IT systems.

Caesars and MGM own more than half of Las Vegas casinos

October 2023: BianLian group extorts Air Canada

A month later, the BianLian group targeted Canada’s flag carrier, Air Canada. The attackers claimed they had stolen more than 210GB of various information, including employee/supplier data and confidential documents. In particular, the attackers managed to steal information on technical violations and security issues of the airline.

The BianLian website demands a ransom from Air Canada. Source

November 2023: LockBit group exploits Citrix Bleed vulnerability

November was remembered for a Citrix Bleed vulnerability exploited by the LockBit group, which we also discussed above. Although patches for this vulnerability were published a month earlier, at the time of the large-scale attack more than 10,000 publicly accessible servers remained vulnerable. This is what the LockBit ransomware took advantage of to breach the systems of several major companies, steal data, and encrypt files.

Among the big-name victims was Boeing, whose stolen data the attackers ended up publishing without waiting for the ransom to be paid. The ransomware also hit the Industrial and Commercial Bank of China (ICBC), the largest commercial bank in the world.

The LockBit website demands a ransom from Boeing

The incident badly hurt the Australian arm of DP World, a major UAE-based logistics company that operates dozens of ports and container terminals worldwide. The attack on DP World Australia’s IT systems massively disrupted its logistics operations, leaving some 30,000 containers stranded in Australian ports.

December 2023: ALPHV/BlackCat infrastructure seized by law enforcement

Toward the end of the year, a joint operation by the FBI, the U.S. Department of Justice, Europol, and law enforcement agencies of several European countries deprived the ALPHV/BlackCat ransomware group of control over its infrastructure. Having hacked it, they quietly observed the cybercriminals’ actions for several months, collecting data decryption keys and aiding BlackCat victims.

In this way, the agencies rid more than 500 organizations worldwide of the ransom threat and saved around $68 million in potential payouts. This was followed in December by a final takeover of the servers, putting an end to BlackCat’s operations.

The joint law enforcement operation to seize ALPHV/BlackCat infrastructure. Source

Various statistics about the ransomware group’s operations were also made public. According to the FBI, during the two years of its activity, ALPHV/BlackCat breached more than a thousand organizations, demanded a total of more than $500 million from victims, and received around $300 million in ransom payments.

How to guard against ransomware attacks

Ransomware attacks are becoming more varied and sophisticated with each passing year, so there isn’t (and can’t be) one killer catch-all tip to prevent incidents. Defense measures must be comprehensive. Focus on the following tasks:

KeyTrap attack can take out a DNS server | Kaspersky official blog https://www.kaspersky.com/blog/keytrap-dnssec-vulnerability-dos-attack/50594/ Mon, 19 Feb 2024 09:23:52 +0000 https://www.kaspersky.com/blog/?p=50594 A group of researchers representing several German universities and institutes have discovered a vulnerability in DNSSEC, a set of extensions to the DNS protocol designed to improve its security, and primarily to counter DNS spoofing.

An attack they dubbed KeyTrap, which exploits the vulnerability, can disable a DNS server by sending it a single malicious data packet. Read on to find out more about this attack.

How KeyTrap works and what makes it dangerous

The DNSSEC vulnerability has only recently become public knowledge, but it was discovered back in December 2023 and registered as CVE-2023-50387. It was assigned a CVSS 3.1 score of 7.5, and a severity rating of “High”. Complete information about the vulnerability and the attack associated with it is yet to be published.

Here’s how KeyTrap works. The malicious actor sets up a nameserver that responds to requests from caching DNS servers – that is, those which serve client requests directly – with a malicious packet. Next, the attacker has the caching server request a DNS record from their malicious nameserver. The record sent in response is a cryptographically signed malicious one. The signature is crafted in such a way that trying to verify it forces the attacked DNS server to run at full CPU capacity for a long period of time.

According to the researchers, a single such malicious packet can freeze the DNS server for anywhere from 170 seconds to 16 hours – depending on the software it runs on. The KeyTrap attack can not only deny access to web content to all clients using the targeted DNS server, but also disrupt various infrastructural services such as spam protection, digital certificate management (PKI), and secure cross-domain routing (RPKI).

The researchers refer to KeyTrap as “the worst attack on DNS ever discovered”. Interestingly enough, the flaws in the signature validation logic making KeyTrap possible were discovered in one of the earliest versions of the DNSSEC specification, published as far back as… 1999. In other words, the vulnerability is about to turn 25!

The origins of KeyTrap can be traced back to RFC 2535, the DNSSEC specification published in 1999

Fending off KeyTrap

The researchers have alerted all DNS server software developers and major public DNS providers. Updates and security advisories to fix CVE-2023-50387 are now available for PowerDNS, NLnet Labs Unbound, and Internet Systems Consortium BIND9. If you are an administrator of a DNS server, it’s high time to install the updates.

Bear in mind, though, that the DNSSEC logic issues that have made KeyTrap possible are fundamental in nature and not easily fixed. Patches released by DNS software developers can only go some way toward solving the problem, as the vulnerability is part of the standard itself rather than of specific implementations. “If we launch [KeyTrap] against a patched resolver, we still get 100 percent CPU usage but it can still respond,” said one of the researchers.

Practical exploitation of the flaw remains a possibility, with the potential result being unpredictable resolver failures. In case this happens, corporate network administrators would do well to prepare a list of backup DNS servers in advance so they can switch as needed to keep the network functioning normally and let users browse the web resources they need unimpeded.
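
As a sketch of what such a fallback could look like in monitoring or application code, the snippet below probes resolvers in order and uses the first one that still answers. It assumes the third-party dnspython package, and the server addresses are placeholders:

```python
# Sketch of resolver failover: try a primary DNS server, then backups.
# Requires the third-party dnspython package; the IP addresses are placeholders.
import dns.exception
import dns.resolver

RESOLVERS = ["192.0.2.53", "1.1.1.1", "8.8.8.8"]  # corporate resolver first, then backups

def resolve_with_fallback(name: str, rtype: str = "A"):
    for server in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            answer = resolver.resolve(name, rtype, lifetime=3)  # give each server 3 seconds
            return server, [record.to_text() for record in answer]
        except dns.exception.DNSException:
            continue  # unreachable or stuck (e.g., busy validating a KeyTrap record) - try next
    raise RuntimeError("no configured resolver answered")

print(resolve_with_fallback("example.com"))
```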

How to run language models and other AI tools locally on your computer | Kaspersky official blog https://www.kaspersky.com/blog/how-to-use-ai-locally-and-securely/50576/ Fri, 16 Feb 2024 11:08:41 +0000 https://www.kaspersky.com/blog/?p=50576 Many people are already experimenting with generative neural networks and finding regular use for them, including at work. For example, ChatGPT and its analogs are regularly used by almost 60% of Americans (and not always with permission from management). However, all the data involved in such operations — both user prompts and model responses — are stored on servers of OpenAI, Google, and the rest. For tasks where such information leakage is unacceptable, you don’t need to abandon AI completely — you just need to invest a little effort (and perhaps money) to run the neural network locally on your own computer – even a laptop.

Cloud threats

The most popular AI assistants run on the cloud infrastructure of large companies. It’s efficient and fast, but your data processed by the model may be accessible to both the AI service provider and completely unrelated parties, as happened last year with ChatGPT.

Such incidents present varying levels of threat depending on what these AI assistants are used for. If you’re generating cute illustrations for some fairy tales you’ve written, or asking ChatGPT to create an itinerary for your upcoming weekend city break, it’s unlikely that a leak will lead to serious damage. However, if your conversation with a chatbot contains confidential info — personal data, passwords, or bank card numbers — a possible leak to the cloud is no longer acceptable. Thankfully, it’s relatively easy to prevent by pre-filtering the data — we’ve written a separate post about that.
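
As a very simple illustration, pre-filtering can be as basic as masking anything that looks like an email address, card number, or phone number before the prompt leaves your machine. The patterns below are deliberately crude examples rather than production-grade PII detection:

```python
# Crude pre-filter that masks obvious secrets before a prompt leaves the machine.
# The regexes are illustrative examples, not production-grade PII detection.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,19}\b"),   # most payment card formats
    "phone": re.compile(r"\+\d{7,15}\b"),             # international numbers
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(redact("Refund to card 4111 1111 1111 1111, contact jane.doe@example.com"))
# -> "Refund to card [card removed], contact [email removed]"
```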

However, in cases where either all the correspondence is confidential (for example, medical or financial information), or the reliability of pre-filtering is questionable (you need to process large volumes of data that no one will preview and filter), there’s only one solution: move the processing from the cloud to a local computer. Of course, running your own version of ChatGPT or Midjourney offline is unlikely to be successful, but other neural networks working locally provide comparable quality with less computational load.

What hardware do you need to run a neural network?

You’ve probably heard that working with neural networks requires super-powerful graphics cards, but in practice this isn’t always the case. Different AI models, depending on their specifics, may be demanding on such computer components as RAM, video memory, drive, and CPU (here, not only the processing speed is important, but also the processor’s support for certain vector instructions). The ability to load the model depends on the amount of RAM, and the size of the “context window” — that is, the memory of the previous conversation — depends on the amount of video memory. Typically, with a weak graphics card and CPU, generation occurs at a snail’s pace (one to two words per second for text models), so a computer with such a minimal setup is only appropriate for getting acquainted with a particular model and evaluating its basic suitability. For full-fledged everyday use, you’ll need to increase the RAM, upgrade the graphics card, or choose a faster AI model.

As a starting point, you can try working with computers that were considered relatively powerful back in 2017: processors no lower than Core i7 with support for AVX2 instructions, 16GB of RAM, and graphics cards with at least 4GB of memory. For Mac enthusiasts, models running on the Apple M1 chip and above will do, while the memory requirements are the same.

When choosing an AI model, you should first familiarize yourself with its system requirements. A search query like “model_name requirements” will help you assess whether it’s worth downloading this model given your available hardware. There are detailed studies available on the impact of memory size, CPU, and GPU on the performance of different models; for example, this one.

Good news for those who don’t have access to powerful hardware — there are simplified AI models that can perform practical tasks even on old hardware. Even if your graphics card is very basic and weak, it’s possible to run models and launch environments using only the CPU. Depending on your tasks, these can even work acceptably well.

Examples of how various computer builds work with popular language models

Choosing an AI model and the magic of quantization

A wide range of language models are available today, but many of them have limited practical applications. Nevertheless, there are easy-to-use and publicly available AI tools that are well-suited for specific tasks, be they generating text (for example, Mistral 7B), or creating code snippets (for example, Code Llama 13B). Therefore, when selecting a model, narrow down the choice to a few suitable candidates, and then make sure that your computer has the necessary resources to run them.

In any neural network, most of the memory strain is courtesy of weights — numerical coefficients describing the operation of each neuron in the network. Initially, when training the model, the weights are computed and stored as high-precision fractional numbers. However, it turns out that rounding the weights in the trained model allows the AI tool to be run on regular computers while only slightly decreasing the performance. This rounding process is called quantization, and with its help the model’s size can be reduced considerably — instead of 16 bits, each weight might use eight, four, or even two bits.
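
As a toy illustration of what quantization does to the numbers (real schemes such as GPTQ use per-group scales and calibration data, but the idea is similar), here's how rounding float32 weights to 8-bit integers shrinks memory while approximately preserving the values:

```python
# Toy symmetric 8-bit quantization of a weight matrix. Real methods (GPTQ and
# friends) use per-group scales and calibration data, but the idea is the same.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4096, 4096)).astype(np.float32)  # 64 MiB at 32 bits per weight

scale = np.abs(weights).max() / 127.0                   # map the value range onto int8
q_weights = np.round(weights / scale).astype(np.int8)   # 16 MiB at 8 bits per weight
restored = q_weights.astype(np.float32) * scale         # dequantized approximation

print(f"fp32 size: {weights.nbytes / 2**20:.0f} MiB")
print(f"int8 size: {q_weights.nbytes / 2**20:.0f} MiB")
print(f"mean absolute rounding error: {np.abs(weights - restored).mean():.4f}")
```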

According to current research, a larger model with more parameters and quantization can sometimes give better results than a model with precise weight storage but fewer parameters.

Armed with this knowledge, you’re now ready to explore the treasure trove of open-source language models, namely the top Open LLM leaderboard. In this list, AI tools are sorted by several generation quality metrics, and filters make it easy to exclude models that are too large, too small, or too accurate.

List of language models sorted by filter set

After reading the model description and making sure it’s potentially a fit for your needs, test its performance in the cloud using Hugging Face or Google Colab services. This way, you can avoid downloading models which produce unsatisfactory results, saving you time. Once you’re satisfied with the initial test of the model, it’s time to see how it works locally!

Required software

Most of the open-source models are published on Hugging Face, but simply downloading them to your computer isn’t enough. To run them, you have to install specialized software, such as LLaMA.cpp, or — even easier — its “wrapper”, LM Studio. The latter allows you to select your desired model directly from the application, download it, and run it in a dialog box.
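
If you prefer a scriptable route to a GUI, the same engine can also be driven from Python through the llama-cpp-python bindings. Here's a minimal sketch; the GGUF file name is just an example of a quantized model you might have downloaded:

```python
# Minimal local text generation with llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name is an example - use whatever quantized model you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # quantized model on disk
    n_ctx=2048,     # context window size
    n_threads=8,    # CPU threads; tune for your machine
)

output = llm(
    "Summarize why running a language model locally helps with confidentiality.",
    max_tokens=200,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```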

Another “out-of-the-box” way to use a chatbot locally is GPT4All. Here, the choice is limited to about a dozen language models, but most of them will run even on a computer with just 8GB of memory and a basic graphics card.

If generation is too slow, then you may need a model with coarser quantization (two bits instead of four). If generation is interrupted or execution errors occur, the problem is often insufficient memory — it’s worth looking for a model with fewer parameters or, again, with coarser quantization.

Many models on Hugging Face have already been quantized to varying degrees of precision, but if no one has quantized the model you want with the desired precision, you can do it yourself using GPTQ.

This week, another promising tool was released to public beta: Chat With RTX from NVIDIA. The manufacturer of the most sought-after AI chips has released a local chatbot capable of summarizing the content of YouTube videos, processing sets of documents, and much more — provided the user has a Windows PC with 16GB of memory and an NVIDIA RTX 30- or 40-series graphics card with 8GB or more of video memory. “Under the hood” are the same varieties of Mistral and Llama 2 from Hugging Face. Of course, powerful graphics cards can improve generation performance, but according to the feedback from the first testers, the existing beta is quite cumbersome (about 40GB) and difficult to install. However, NVIDIA’s Chat With RTX could become a very useful local AI assistant in the future.

The code for the game “Snake”, written by the quantized language model TheBloke/CodeLlama-7B-Instruct-GGUF

The applications listed above perform all computations locally, don’t send data to servers, and can run offline so you can safely share confidential information with them. However, to fully protect yourself against leaks, you need to ensure not only the security of the language model but also that of your computer – and that’s where our comprehensive security solution comes in. As confirmed in independent tests, Kaspersky Premium has practically no impact on your computer’s performance — an important advantage when working with local AI models.
