VoltSchemer: attacks on wireless chargers through the power supply | Kaspersky official blog
https://www.kaspersky.com/blog/voltschemer-attack-wireless-chargers/50710/

A group of researchers from the University of Florida has published a study on a type of attack using Qi wireless chargers, which they’ve dubbed VoltSchemer. In the study, they describe in detail how these attacks work, what makes them possible, and what results they’ve achieved.

In this post, first we’ll discuss the researchers’ main findings. Then we’ll explore what it all means practically speaking — and whether you should be concerned about someone roasting your smartphone through a wireless charger.

The main idea behind the VoltSchemer attacks

The Qi standard has become the dominant one in its field: it’s supported by all the latest wireless chargers and smartphones capable of wireless charging. VoltSchemer attacks exploit two fundamental features of the Qi standard.

The first is the way the smartphone and wireless charger exchange information to coordinate the battery charging process: the Qi standard has a communication protocol that uses the only “thing” connecting the charger and the smartphone — a magnetic field — to transmit messages.

The second feature is the way that wireless chargers are intended for anyone to freely use. That is, any smartphone can be placed on any wireless charger without any kind of prior pairing, and the battery will start charging immediately. Thus, the Qi communication protocol involves no encryption — all commands are transmitted in plain text.

It is this lack of encryption that makes communication between charger and smartphone susceptible to man-in-the-middle attacks; that is, said communication can be intercepted and tampered with. That, coupled with the first feature (use of the magnetic field), means such tampering is not even that hard to accomplish: to send malicious commands, attackers only need to be able to manipulate the magnetic field to mimic Qi-standard signals.
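To get a feel for just how little stands between an attacker and the charger, here’s a sketch of composing a Qi-style packet in code. The framing used below (one header byte identifying the packet type, message bytes, and a checksum computed as the XOR of header and message) and the End Power Transfer values are based on our reading of the public Qi specification; treat them as illustrative assumptions rather than a reference implementation.

```kotlin
// Sketch: composing an unauthenticated Qi-style packet (header + message + XOR checksum).
// Header value 0x02 ("End Power Transfer") and the reason code are illustrative assumptions.
fun buildQiPacket(header: Int, message: ByteArray): ByteArray {
    var checksum = header
    for (b in message) checksum = checksum xor (b.toInt() and 0xFF)
    return byteArrayOf(header.toByte(), *message, checksum.toByte())
}

fun main() {
    val packet = buildQiPacket(header = 0x02, message = byteArrayOf(0x01))
    println(packet.joinToString(" ") { "%02x".format(it) })   // prints: 02 01 03
}
```

There is no signature, key, or secret anywhere in such a packet, which is exactly why spoofing it comes down to shaping the magnetic field correctly.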

To illustrate the attack, the researchers created a malicious power adapter: an overlay on a regular wall USB socket. Source

And that’s exactly what the researchers did: they built a “malicious” power adapter disguised as a wall USB socket, which allowed them to create precisely tuned voltage noise. They were able to send their own commands to the wireless charger, as well as block Qi messages sent by the smartphone.

Thus, VoltSchemer attacks require no modifications to the wireless charger’s hardware or firmware. All that’s necessary is to place a malicious power source in a location suitable for luring unsuspecting victims.

Next, the researchers explored all the ways potential attackers could exploit this method. That is, they considered various possible attack vectors and tested their feasibility in practice.

VoltSchemer attacks don’t require any modifications to the wireless charger itself — a malicious power source is enough. Source

1. Silent commands to Siri and Google Assistant voice assistants

The first thing the researchers tested was the possibility of sending silent voice commands to the built-in voice assistant of the charging smartphone through the wireless charger. They copied this attack vector from their colleagues at Hong Kong Polytechnic University, who dubbed this attack Heartworm.

The general idea of the Heartworm attack is to send silent commands to the smartphone’s voice assistant using a magnetic field. Source

The idea here is that the smartphone’s microphone converts sound into electrical vibrations. It’s therefore possible to generate these electrical vibrations in the microphone directly using electricity itself rather than actual sound. To prevent this from happening, microphone manufacturers use electromagnetic shielding — Faraday cages. However, there’s a key nuance here: although these shields are good at suppressing the electrical component, they can be penetrated by magnetic fields.

Smartphones that can charge wirelessly are typically equipped with a ferrite screen, which protects against magnetic fields. However, this screen is located right next to the induction coil, and so doesn’t cover the microphone. Thus, today’s smartphone microphones are quite vulnerable to attacks from devices capable of manipulating magnetic fields — such as wireless chargers.

Microphones in today’s smartphones aren’t protected from magnetic field manipulation. Source

The creators of VoltSchemer expanded the already known Heartworm attack with the ability to affect the microphone of a charging smartphone using a “malicious” power source. The authors of the original attack used a specially modified wireless charger for this purpose.
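Conceptually, such injection can be thought of as amplitude modulation: an audio-frequency “command” waveform is impressed onto a much higher-frequency carrier that the charger’s field can carry. The sketch below illustrates only that single step and is a toy model; the real Heartworm/VoltSchemer signal chain (power-supply noise shaping, coil coupling, the microphone circuit’s response) is far more involved.

```kotlin
import kotlin.math.PI
import kotlin.math.sin

// Toy illustration: amplitude-modulate a baseband audio "command" onto a carrier.
// Frequencies and the modulation index are arbitrary example values.
fun amModulate(
    baseband: DoubleArray,          // audio command samples in [-1, 1]
    sampleRate: Double,             // samples per second
    carrierHz: Double,              // carrier frequency
    modulationIndex: Double = 0.5
): DoubleArray = DoubleArray(baseband.size) { i ->
    val t = i / sampleRate
    (1.0 + modulationIndex * baseband[i]) * sin(2.0 * PI * carrierHz * t)
}

fun main() {
    val rate = 2_000_000.0                                                   // 2 MHz sampling
    val tone = DoubleArray(2_000) { sin(2.0 * PI * 1_000.0 * it / rate) }    // 1 kHz "command"
    val signal = amModulate(tone, rate, carrierHz = 300_000.0)
    println(signal.take(5))
}
```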

2. Overheating a charging smartphone

Next, the researchers tested whether it’s possible to use the VoltSchemer attack to overheat a smartphone charging on the compromised charger. Normally, when the battery reaches the required charge level or the temperature rises to a threshold value, the smartphone sends a command to stop the charging process.

However, the researchers were able to use VoltSchemer to block these commands. Without receiving the command to stop, the compromised charger continues to supply energy to the smartphone, gradually heating it up — and the smartphone can’t do anything about it. For cases such as this, smartphones have emergency defense mechanisms to avoid overheating: first, the device closes applications, and if that doesn’t help it shuts down completely.

Using the VoltSchemer attack, researchers were able to heat a smartphone on a wireless charger to a temperature of 178°F — approximately 81°C. Source

Thus, the researchers were able to heat a smartphone up to a temperature of 81°C (178°F), which is quite dangerous for the battery — and in certain circumstances could lead to its catching fire (which could of course lead to other things catching fire if the charging phone is left unattended).

3. “Frying” other stuff

Next, the researchers explored the possibility of “frying” various other devices and everyday items. Of course, under normal circumstances, a wireless charger shouldn’t activate unless it receives a command from the smartphone placed on it. However, with the VoltSchemer attack, such a command can be given at any time, as well as a command to not stop charging.

Now, take a guess what will happen to any items lying on the charger at that moment! Nothing good, that’s for sure. For example, the researchers were able to heat a paperclip to a temperature of 280°C (536°F) — enough to set fire to any attached documents. They also managed to fry to death a car key, a USB flash drive, an SSD drive, and RFID chips embedded in bank cards, office passes, travel cards, biometric passports and other such documents.

Also using the VoltSchemer attack, researchers were able to disable car keys, a USB flash drive, an SSD drive, and several cards with RFID chips, as well as heat a paperclip to a temperature of 536°F — 280°C. Source

In total, the researchers examined nine different models of wireless chargers available in stores, and all of them were vulnerable to VoltSchemer attacks. As you might guess, the models with the highest power pose the greatest danger, as they have the most potential to cause serious damage and overheat smartphones.

Should you fear a VoltSchemer attack in real life?

Protecting against VoltSchemer attacks is fairly straightforward: simply avoid using public wireless chargers and don’t connect your own wireless charger to any suspicious USB ports or power adapters.

While VoltSchemer attacks are quite interesting and can have spectacular results, their real-world practicality is highly questionable. Firstly, such an attack is very difficult to organize. Secondly, it’s not exactly clear what the benefits to an attacker would be — unless they’re a pyromaniac, of course.

But what this research clearly demonstrates is how inherently dangerous wireless chargers can be — especially the more powerful models. So, if you’re not completely sure of the reliability and safety of a particular wireless charger, you’d be wise to avoid using it. While wireless charger hacking is unlikely, the danger of your smartphone randomly getting roasted due to a “rogue” charger that no longer responds to charging commands isn’t entirely absent.

Using ambient light sensor for spying | Kaspersky official blog
https://www.kaspersky.com/blog/ambient-light-sensor-privacy/50473/

An article in Science Magazine published mid-January describes a non-trivial method of snooping on smartphone users through an ambient light sensor. All smartphones and tablets have this component built-in — as do many laptops and TVs. Its primary task is to sense the amount of ambient light in the environment the device finds itself in, and to alter the brightness of the display accordingly.

But first we need to explain why a threat actor would use a tool ill-suited for capturing footage instead of the target device’s regular camera. The reason is that such “ill-suited” sensors are usually totally unprotected. Let’s imagine an attacker tricked a user into installing a malicious program on their smartphone. The malware will struggle to gain access to oft-targeted components, such as the microphone or camera. But to the light sensor? Easy as pie.

So, the researchers proved that this ambient light sensor can be used instead of a camera; for example, to get a snapshot of the user’s hand entering a PIN on a virtual keyboard. In theory, by analyzing such data, it’s possible to reconstruct the password itself. This post explains the ins and outs in plain language.

“Taking shots” with a light sensor. Source

A light sensor is a rather primitive piece of technology. It’s a light-sensitive photocell for measuring the brightness of ambient light several times per second. Digital cameras use very similar (albeit smaller) light sensors, but there are many millions of them. The lens projects an image onto this photocell matrix, the brightness of each element is measured, and the result is a digital photograph. Thus, you could describe a light sensor as the most primitive digital camera there is: its resolution is exactly one pixel. How could such a thing ever capture what’s going on around the device?

The researchers used the Helmholtz reciprocity principle, formulated back in the mid-19th century. This principle is widely used in computer graphics, for example, where it greatly simplifies calculations. In 2005, the principle formed the basis of the proposed dual photography method. Let’s take an illustration from this paper to help explain:

On the left is a real photograph of the object. On the right is an image calculated from the point of view of the light source. Source

Imagine you’re photographing objects on a table. A lamp shines on the objects, the reflected light hits the camera lens, and the result is a photograph. Nothing out of the ordinary. In the illustration above, the image on the left is precisely that — a regular photo. Next, in greatly simplified terms, the researchers began to alter the brightness of the lamp and record the changes in illumination. As a result, they collected enough information to reconstruct the image on the right — taken as if from the point of view of the lamp. There’s no camera in this position and never was, but based on the measurements, the scene was successfully reconstructed.

Most interesting of all is that this trick doesn’t even require a camera. A simple photoresistor will do… just like the one in an ambient light sensor. A photoresistor (or “single-pixel camera”) measures changes in the light reflected from objects, and this data is used to construct a photograph of them. The quality of the image will be low, and many measurements must be taken — numbering in the hundreds or thousands.
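To make the “single-pixel camera” idea concrete, here’s a toy sketch. It uses the simplest possible measurement patterns (lighting up one region of the screen at a time), so each sensor reading directly recovers the reflectivity of one area of the scene; the actual study used richer black-and-white patterns plus machine-learning denoising, so take this purely as an illustration of the principle.

```kotlin
// Toy single-pixel imaging: one known pattern per measurement, one scalar reading back.
fun reconstructScene(regions: Int, measure: (DoubleArray) -> Double): DoubleArray =
    DoubleArray(regions) { i ->
        val pattern = DoubleArray(regions) { j -> if (j == i) 1.0 else 0.0 }  // light up region i only
        measure(pattern)                                                      // reading ≈ reflectivity of region i
    }

fun main() {
    val hiddenScene = doubleArrayOf(0.1, 0.9, 0.4, 0.7)  // a tiny 2x2 "image" in front of the screen
    // The one-pixel sensor reports the total reflected brightness for a displayed pattern.
    val sensor = { pattern: DoubleArray -> pattern.indices.sumOf { pattern[it] * hiddenScene[it] } }
    println(reconstructScene(hiddenScene.size, sensor).toList())             // [0.1, 0.9, 0.4, 0.7]
}
```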

Experimental setup: a Samsung Galaxy View tablet and a mannequin hand. Source

Let’s return to the study and the light sensor. The authors of the paper used a fairly large Samsung Galaxy View tablet with a 17-inch display. Various patterns of black and white rectangles were displayed on the tablet’s screen. A mannequin was positioned facing the screen in the role of a user entering something on the on-screen keyboard. The light sensor captured changes in brightness. In several hundred measurements like this, an image of the mannequin’s hand was produced. That is, the authors applied the Helmholtz reciprocity principle to get a photograph of the hand, taken as if from the point of view of the screen. The researchers effectively turned the tablet display into an extremely low-quality camera.

Comparing real objects in front of the tablet with what the light sensor captured. Source

True, not the sharpest image. The above-left picture shows what needed to be captured: in one case, the open palm of the mannequin; in the other, how the “user” appears to tap something on the display. The images in the center are a reconstructed “photo” at 32×32 pixel resolution, in which almost nothing is visible — too much noise in the data. But with the help of machine-learning algorithms, the noise was filtered out to produce the images on the right, where we can distinguish one hand position from the other. The authors of the paper give other examples of typical gestures that people make when using a tablet touchscreen. Or rather, examples of how they managed to “photograph” them:

Capturing various hand positions using a light sensor. Source

So can we apply this method in practice? Is it possible to monitor how the user interacts with the touchscreen of a tablet or smartphone? How they enter text on the on-screen keyboard? How they enter credit card details? How they open apps? Fortunately, it’s not that straightforward. Note the captions above the “photographs” in the illustration above. They show how slowly this method works. In the best-case scenario, the researchers were able to reconstruct a “photo” of the hand in just over three minutes. The image in the previous illustration took 17 minutes to capture. Real-time surveillance at such speeds is out of the question. It’s also clear now why most of the experiments featured a mannequin’s hand: a human being simply can’t hold their hand motionless for that long.

But that doesn’t rule out the possibility of the method being improved. Let’s ponder the worst-case scenario: if each hand image can be obtained not in three minutes, but in, say, half a second; if the on-screen output is not some strange black-and-white figures, but a video or set of pictures or animation of interest to the user; and if the user does something worth spying on… — then the attack would make sense. But even then — not much sense. All the researchers’ efforts are undermined by the fact that if an attacker managed to slip malware onto the victim’s device, there are many easier ways to then trick them into entering a password or credit card number. Perhaps for the first time in covering such papers (examples: one, two, three, four), we are struggling even to imagine a real-life scenario for such an attack.

All we can do is marvel at the beauty of the proposed method. This research serves as another reminder that the seemingly familiar, inconspicuous devices we are surrounded by can harbor unusual, lesser-known functionalities. That said, for those concerned about this potential violation of privacy, the solution is simple. Such low-quality images are due to the fact that the light sensor takes measurements quite infrequently: 10–20 times per second. The output data also lacks precision. However, that’s only relevant for turning the sensor into a camera. For the main task — measuring ambient light — this rate is even too high. We can “coarsen” the data even more — transmitting it, say, five times per second instead of 20. For matching the screen brightness to the level of ambient light, this is more than enough. But spying through the sensor — already improbable — would become impossible. Perhaps for the best.
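For reference, this is roughly how any app reads that sensor today: no permission prompt is involved, and the sampling period the app asks for is only a hint, so the proposed coarsening would simply happen on the OS side of this call. A minimal sketch using the standard SensorManager API:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Sketch: subscribing to the ambient light sensor. The requested period (200,000 µs ≈ 5 Hz)
// is only a hint; the OS and hardware decide the actual delivery rate.
fun listenToLightSensor(context: Context) {
    val sm = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    val light = sm.getDefaultSensor(Sensor.TYPE_LIGHT) ?: return

    val listener = object : SensorEventListener {
        override fun onSensorChanged(event: SensorEvent) {
            println("ambient light: ${event.values[0]} lux")   // one brightness value per reading
        }
        override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) { /* not needed here */ }
    }
    sm.registerListener(listener, light, 200_000)
}
```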

Restricted Settings in Android 13 and 14 | Kaspersky official blog
https://www.kaspersky.com/blog/android-restricted-settings/49991/

With each new version of the Android operating system, new features are added to protect users from malware. For example, Android 13 introduced Restricted Settings. In this post, we’ll discuss what this feature involves, what it’s designed to protect against, and how effectively it does its job (spoiler: not very well).

What are Restricted Settings?

How do Restricted Settings operate? Imagine you’re installing an application from a third-party source — that is, downloading an APK file from somewhere and initiating its installation. Let’s suppose this application requires access to certain functions that Google considers particularly dangerous (and for good reason — but more on that later). In this case, the application will ask you to enable the necessary functions for it in your operating system settings.

However, in both Android 13 and 14, this isn’t possible for applications installed by users from APK files. If you go to your smartphone’s settings and try to grant dangerous permissions to such an application, a window titled Restricted Settings will appear. It will say “For your security, this setting is currently unavailable”.

When an application installed from third-party sources requests dangerous permissions, a window pops up with the title Restricted Settings

So, which permissions does Google consider so hazardous that access to them is blocked for any applications not downloaded from the store? Unfortunately, Google isn’t rushing to share this information. We therefore have to figure it out from independent publications for Android developers. At present, two such restrictions are known:

  • Permission to use Accessibility
  • Permission to access notifications (Notification Listener)

It’s possible that this list will change in future versions of Android. But for now it seems that these are all the permissions that Google has decided to restrict for applications downloaded from unknown sources. Now let’s discuss why this is even necessary.

Why Google considers Accessibility dangerous

We previously talked about Accessibility in a recent post titled the Top-3 most dangerous Android features. In short, Accessibility constitutes a set of Android features designed to assist people with severe visual impairments.

The initial idea was that Accessibility would enable applications to act as mediators between the visual interface of the operating system and individuals unable to use this interface but capable of issuing commands and receiving information through alternative means — typically by voice. Thus, Accessibility serves as a guide dog in the virtual space.

An application using Accessibility can see everything happening on the Android device’s screen, and perform any action on the user’s behalf — pressing buttons, inputting data, changing settings, and more.

This is precisely why the creators of malicious Android applications are so fond of Accessibility. This set of functions enables them to do a great deal of harm: spy on correspondence, snoop on passwords, steal financial information, intercept one-time transaction confirmation codes, and so on. Moreover, Accessibility also allows malware to perform user actions within other applications. For example, it can make a transfer in a banking app and confirm the transaction using the one-time code from a text message.
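To make the scope of that access concrete, here’s a minimal, benign skeleton of an accessibility service; the class name and logic are illustrative, not taken from any real malware. Once the user enables such a service, its callback receives events, including on-screen text, from every other app.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent

// Illustrative skeleton only. The service must be declared in the manifest and
// explicitly enabled by the user in the Accessibility settings.
class WatcherService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // Fires for activity in *other* apps: window changes, clicks, text input, etc.
        val visibleText = event.source?.text ?: event.text.joinToString(" ")
        // A screen reader would voice this text; spyware could just as easily log it.
        println("event ${event.eventType} from ${event.packageName}: $visibleText")
    }

    override fun onInterrupt() { /* required override */ }
}
```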

This is why Google deems the permission to access Accessibility particularly perilous — and rightly so. For apps available on Google Play, their use is subject to careful scrutiny by moderators. As for programs downloaded from unknown sources, Android developers have attempted to completely disable access to this set of functions.

Why Google restricts access to notifications

We’ve covered Accessibility, so now let’s talk about what’s wrong with applications accessing notifications (in Android, this function is called Notification Listener). The danger lies in the fact that notifications may contain a lot of personal information about the user.

For example, with access to all notifications, a malicious app can read almost all of the user’s incoming correspondence. In particular, it can intercept messages containing one-time codes for confirming bank transactions, logging in to various services (such as messengers), changing passwords, and so on.

Here, two serious threats arise. Firstly, an app with access to Notification Listener has a simple and convenient way to monitor the user — very useful for spyware.

Secondly, a malicious app can use the information obtained from notifications to hijack user accounts. And all this without any extra tricks, complex technical gimmicks, or expensive vulnerabilities — just exploiting Android’s built-in capabilities.
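For illustration, here’s a minimal sketch of the API in question, the same Notification Listener mechanism that smartwatch companion apps use legitimately (the class name is hypothetical). Once the user grants notification access, every posted notification passes through this callback.

```kotlin
import android.app.Notification
import android.service.notification.NotificationListenerService
import android.service.notification.StatusBarNotification

// Illustrative skeleton only; requires the user to grant notification access in settings.
class ListenerExample : NotificationListenerService() {
    override fun onNotificationPosted(sbn: StatusBarNotification) {
        val extras = sbn.notification.extras
        val title = extras.getCharSequence(Notification.EXTRA_TITLE)
        val text = extras.getCharSequence(Notification.EXTRA_TEXT)
        // A spyware-style listener could parse `text` for one-time codes at this point.
        println("[${sbn.packageName}] $title: $text")
    }
}
```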

It’s not surprising that Google considers access to notifications no less dangerous than access to Accessibility, and attempts to restrict it for programs downloaded from outside the app stores.

How Android malware bypasses Restricted Settings

In both Android 13 and 14, the mechanism to protect against the use of dangerous functions by malicious apps downloaded from unknown sources operates as follows. App stores typically use the so-called session-based installation method. Apps installed using this method are considered safe by the system, no restrictions are placed on them, and users can grant these apps access to Accessibility and Notification Listener.

However, if an app is installed without using the session-based method — which is very likely to happen when a user manually downloads an APK — it’s deemed unsafe, and the Restricted Settings function is enabled for it.

Hence the bypass mechanism: even if a malicious app downloaded from an untrusted source cannot access Accessibility or notifications, it can use the session-based method to install another malicious app! It will be considered safe, and access restrictions won’t be activated.
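For context, here’s a simplified sketch of what a session-based install looks like through the documented PackageInstaller API. The file name and broadcast action are illustrative assumptions; a real install still requires the REQUEST_INSTALL_PACKAGES permission and the user’s confirmation of the system dialog, which is exactly why droppers dress the second app up as something desirable.

```kotlin
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.content.pm.PackageInstaller

// Sketch of a session-based install via the standard PackageInstaller API.
fun installViaSession(context: Context, apkBytes: ByteArray) {
    val installer = context.packageManager.packageInstaller
    val params = PackageInstaller.SessionParams(PackageInstaller.SessionParams.MODE_FULL_INSTALL)
    val sessionId = installer.createSession(params)

    installer.openSession(sessionId).use { session ->
        session.openWrite("payload.apk", 0, apkBytes.size.toLong()).use { out ->
            out.write(apkBytes)
            session.fsync(out)
        }
        // The system reports the result to this (illustrative) broadcast action.
        val statusIntent = PendingIntent.getBroadcast(
            context, 0, Intent("com.example.INSTALL_STATUS"), PendingIntent.FLAG_MUTABLE
        )
        session.commit(statusIntent.intentSender)
    }
}
```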

We’re not talking theory here – this is a real problem: malware developers have already learned to bypass the Restricted Settings mechanism in the latest versions of their creations. Therefore, the restrictions in Android 13 and 14 will only combat malware that’s old — not protect against new malware.

How to disable Restricted Settings when installing an app from third-party sources

Even though it’s not safe, sometimes a user might need to grant access to Accessibility or Notification Listener to an app downloaded from outside the store. We recommend extreme caution in this case, and strongly advise scanning such an application with a reliable antivirus before installing it.

To disable the restrictions:

  • Open your smartphone settings
  • Go to the Apps section
  • Select the app you want to remove access restrictions for
  • In the upper right corner, tap on the three dots icon
  • Select Allow restricted settings

That’s it! Now, the menu option that lets you grant the app the necessary permissions will become active.

How to protect your Android smartphone

Since you can’t rely on Restricted Settings, you’ll have to use other methods to protect yourself from malware that abuses access to Accessibility or notifications:

  • Be wary of any apps requesting access to these features — we’ve discussed above why this is very dangerous
  • Try to install applications from official stores. Sometimes malware can still be found in them, but the risk is much lower than the chance of picking up trojans from obscure sites on the internet
  • If you really have to install an app from an unreliable source, remember to disable this option immediately after installation
  • Scan all applications you install with a reliable mobile antivirus.
  • If you’re using the free version of our protection tool, remember to do this manually before launching each new application. In the paid version of Kaspersky: Antivirus & VPN, this scan runs automatically.
Three most dangerous Android features | Kaspersky official blog
https://www.kaspersky.com/blog/android-most-dangerous-features/49418/

Android is a well-designed operating system that gets better and more secure with each new version. However, there are several features that may put your smartphone or tablet at serious risk of infection. Today, we take a look at the three that are the most dangerous of all — and how to minimize the risks when using them.

Accessibility

Accessibility is an extremely powerful set of Android features originally designed for people with severe visual impairments. To use smartphones, they need special apps that read on-screen text aloud, and respond to voice commands and convert them into taps on UI controls.

For those with visual impairments, this function is not just useful — it’s essential. But the very modus operandi of Accessibility is to grant an app access to everything that’s going on in other apps. This violates the principle of strict isolation, which is a core security feature of Android.

And it’s not just tools for helping the visually impaired that take advantage of the Accessibility feature. For example, mobile antiviruses often use it to keep an eye out for anything suspicious taking place in other apps.

But every coin has a flip side. For example, malicious apps can request permission to access this feature set too. This isn’t surprising, since such access makes it easy to spy on everything on your smartphone: read messages, steal credentials and financial data, intercept one-time transaction confirmation codes, and so on.

What’s more, access to this feature allows cybercriminals to perform user actions on the smartphone, such as tapping buttons and filling out forms. For instance, malware can fill out a transfer form in a banking app and confirm it with a one-time code from a text message, all on its own.

Therefore, before you give an app access to Accessibility, always think carefully: do you really trust its developers?

Install unknown apps

By default, only the official store app has the right to install other programs on Android. Given an unmodified version of the system, this is, of course, Google Play. But together with (or instead of) Google Play, smartphone developers often use their own — such as Huawei AppGallery or Samsung Galaxy Store. Indeed, Android is a democratic operating system with no strict limitations on app download sources. You can easily allow any app to download and install programs from anywhere. But it’s just as easy to get your smartphone infected with something nasty this way too, which is why we don’t recommend using it.
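Incidentally, this per-app “install unknown apps” permission is visible to developers too. A minimal sketch (standard framework calls, Android 8 and later) of how an app checks whether it may install packages and sends the user to the corresponding settings screen:

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Sketch: check the per-app "install unknown apps" permission and open its settings screen.
fun checkUnknownSources(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        if (!context.packageManager.canRequestPackageInstalls()) {
            val intent = Intent(
                Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
                Uri.parse("package:${context.packageName}")
            ).addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
            context.startActivity(intent)
        }
    }
}
```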

Official stores are usually the safest sources for downloading apps. Before being published in an official store, apps are subjected to security checks. And if it later transpires that malware has sneaked in, the dangerous app is quickly kicked out of the store.

Sure, even Google Play is not totally immune to malware (alas, it gets in more often than we’d like). Still, official stores at least try to keep their house in order — unlike third-party sites where malware is endemic, and the owners couldn’t care less. A case in point: attackers once even managed to infect the third-party Android app store itself.

The most important thing to remember is this: if you do decide you absolutely must download and install something on your Android smartphone not from the official app store — don’t forget to disable the ability to do so immediately after the installation. It’s also a good idea to scan your device afterward with a mobile antivirus to make sure no malware’s appeared; the free version of our Kaspersky: Antivirus & VPN will do the job just fine.

Superuser rights (rooting)

Less popular than the two features above — but by no means less dangerous — is the ability to gain superuser rights in Android. This process is popularly known as “rooting” (“root” is the name given to the superuser account in Linux).

The designation is appropriate since superuser rights give superpowers to anyone who gets them on the device. For the user, they open up the usually forbidden depths of Android. Superuser rights grant full access to the file system, network traffic, smartphone hardware, installation of any firmware, and much more.

Again, there’s a downside: if malware gets on a rooted smartphone, it too acquires superpowers. For this reason, rooting is a favored method of sophisticated spyware apps used by many government intelligence agencies — as well as cutting-edge stalkerware that’s accessible to regular users.

Therefore, we strongly discourage rooting your Android smartphone or tablet — unless you’re an expert with a clear understanding of how the operating system works.
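Incidentally, this is why many banking and security apps try to detect root. A common, admittedly imperfect heuristic looks like the sketch below; the paths listed are conventional examples rather than an exhaustive or authoritative set.

```kotlin
import java.io.File

// Sketch of a basic root-detection heuristic: look for an `su` binary in common
// locations and for test-signed firmware. Easily fooled, but widely used in practice.
fun looksRooted(): Boolean {
    val suPaths = listOf(
        "/system/bin/su", "/system/xbin/su", "/sbin/su",
        "/system/sd/xbin/su", "/data/local/xbin/su", "/data/local/bin/su"
    )
    val hasSuBinary = suPaths.any { File(it).exists() }
    val testKeys = android.os.Build.TAGS?.contains("test-keys") == true
    return hasSuBinary || testKeys
}
```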

How Android users can stay safe

Lastly, a few tips on how to stay safe:

  • Be wary of apps that request access to Accessibility.
  • Try to install apps only from official stores. Yes, you can come across malware there too, but it’s still much safer than using alternative sites where no one is responsible for security.
  • If you do install an app from a third-party source, don’t forget to disable “Install unknown apps” immediately afterward.
  • Never use rooted Android unless you fully understand how root permissions work.
  • Make sure you install reliable protection on all your Android devices.
  • If you use the free version of our security solution, remember to manually run a scan from time to time. In the paid version of Kaspersky: Antivirus & VPN, scanning takes place automatically.
Brute-forcing a fingerprint-protected smartphone | Kaspersky official blog
https://www.kaspersky.com/blog/fingerprint-brute-force-android/48303/

Fingerprint recognition is believed to be a fairly secure authentication method. Publications on different ways to trick the fingerprint sensor do pop up now and again, but all the suggested methods one way or another boil down to physical imitation of the phone owner’s finger — whether using a silicone pad or conductive ink printout. This involves procuring a high-quality image of a finger — and not any finger, mind, but the one registered in the system.

In a nutshell, all these methods come with lots of real-world hassle. But is it possible to do it somehow more elegantly, without leaving the purely digital world and all its benefits? Turns out, it is: Chinese researchers Yu Chen and Yiling He recently published a study on how to brute-force almost any fingerprint-protected Android smartphone. They called the attack BrutePrint.

How unique are fingerprints?

Before we investigate our Chinese comrades’ work, here’s some brief background theory. To begin with (and you may already know this): fingerprints are truly unique and never alter with age.

Now, way back in 1892, English scientist Sir Francis Galton published a work laconically entitled Finger Prints. In it, he summarized the then-current scientific data on fingerprints, and Galton’s work laid the theoretical foundation for further practical use of fingerprints in forensics.

Among other things, Sir Francis Galton calculated that fingerprint match probability was “less than 2⁻³⁶, or one to about sixty-four thousand million”. Forensic experts stick with this value even to this day.

By the way, if you’re into hardcore anatomy or the biological factors behind the uniqueness of fingerprints, here’s a new research paper on the subject.

How reliable are fingerprint sensors?

Sir Francis’s work and all that stemmed from it, however, relates to the (warm) analog world, covering things like the taking of fingerprints, matching them to those left at, say, a crime scene, and Bob’s your uncle. But things are somewhat different in the (cold) digital reality. The quality of digital fingerprint representation depends on multiple factors: type of sensor, its size and resolution, and — in no small measure — “image” post-processing and matching algorithms.

Fingerprints as they were seen by Sir Francis Galton 150 years ago (left), and by your cutting-edge smartphone’s optical sensor (right). Source and Source

And, of course, the developer needs to make the device dirt-cheap (or no one will buy it), achieve split-second authentication (or get overwhelmed by complaints about slow speed), and avoid false negatives at all costs (or the user will discard the whole thing altogether). The result is not very accurate authentication systems.

So when referring to sensors used in smartphones, much less optimistic figures are quoted for fingerprint fragment match probability than the famous 1 to 64 billion. For example, Apple estimates the probability for Touch ID at 1 to 50,000. So it can be assumed that for budget-friendly sensor models these odds shrink by another order of magnitude or two.

This takes us from billions to thousands. Which is already within reach for brute-forcing. So, the potential hacker is only one obstacle away from the prize: the limit on the number of fingerprint recognition attempts. Normally only five of them are allowed, followed by a prolonged fingerprint authentication lockout period.
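To see why that limit matters, here’s a back-of-the-envelope sketch (our own illustration, not figures from the study) that treats every spoofed fingerprint as an independent draw against a fixed 1-in-50,000 false-match rate:

```kotlin
import kotlin.math.pow

// Probability that at least one of `attempts` random fingerprints gets accepted,
// assuming a fixed, independent per-attempt false-match rate.
fun successProbability(perAttempt: Double, attempts: Int): Double =
    1.0 - (1.0 - perAttempt).pow(attempts)

fun main() {
    val p = 1.0 / 50_000
    println(successProbability(p, 5))        // ≈ 0.0001: five attempts, then lockout
    println(successProbability(p, 50_000))   // ≈ 0.63: what unlimited attempts buy an attacker
}
```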

Can this obstacle be overcome? Yu Chen and Yiling He give an affirmative reply to that in their study.

BrutePrint: preparing to brute-force fingerprint-protected Android smartphones

The researchers’ method is based on a flaw in Android smartphones’ generic fingerprint sensor implementation: none of the tested models encrypted the communication channel between the sensor and the system. This opens up the opportunity for an MITM attack on the authentication system: with a device connected to the smartphone via the motherboard’s SPI port, one can both intercept incoming messages from the fingerprint sensor, and send one’s own messages by emulating the fingerprint sensor.

The researchers built such a device (a pseudo-sensor) and supplemented it with a gadget that automatically taps the smartphone’s screen. The hardware part of the attack was thus set up to feed multiple fingerprint images to smartphones automatically.

Device for brute-forcing the fingerprint authentication system. Source

From there, they proceeded to prepare fingerprint specimens for brute-forcing. The researchers don’t disclose the source of their fingerprint database, confining themselves to general speculation as to how the attackers might get it (research collections, leaked data, own database).

As a next step, the fingerprint database was submitted to an AI to generate something like a fingerprint dictionary to maximize brute-forcing performance. Fingerprint images were adapted by AI to match those generated by the sensors installed on the smartphones participating in the study.

Images returned by different types of fingerprint sensors are quite different from one another. Source

The two vulnerabilities at the bottom of BrutePrint: Cancel-After-Match-Fail and Match-After-Lock

The BrutePrint attack exploits two vulnerabilities. The researchers discovered them in the basic logic of the fingerprint authentication framework which, from the looks of it, comes with all Android smartphones without exception. The vulnerabilities were called Cancel-After-Match-Fail and Match-After-Lock.

The Cancel-After-Match-Fail vulnerability

Cancel-After-Match-Fail (CAMF) exploits two important features of the fingerprint authentication mechanism. The first is the fact that it relies on multisampling, meaning that each authentication attempt uses not just one but a series of two to four fingerprint images (depending on the smartphone model). The second is the fact that, in addition to fail, an authentication attempt can also result in error — and in this case, there’s a return to the start.

This allows sending a series of images ending in a frame pre-edited to trigger an error. Thus, if one of the images in the series triggers a match, a successful authentication will take place. If not, the cycle will end in an error, after which a new series of images can be submitted without wasting the precious attempt.

How Cancel-After-Match-Fail works: error gets you back to the starting point without wasting an attempt. Source
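A schematic simulation of that logic (class and method names are our own illustration, not code from the study) shows why the attempt counter never fills up when every series of spoofed images ends with an error-triggering frame:

```kotlin
// Schematic model of the Cancel-After-Match-Fail behavior described above.
enum class Outcome { MATCH, FAIL, ERROR }

class FingerprintAuth {
    var failedAttempts = 0
        private set

    // One authentication attempt consumes a *series* of samples (multisampling).
    fun attempt(series: List<Outcome>): Outcome {
        for (sample in series) {
            if (sample == Outcome.MATCH) return Outcome.MATCH   // success
            if (sample == Outcome.ERROR) return Outcome.ERROR   // cancelled: counter untouched
        }
        failedAttempts++                                         // only a clean failure counts toward lockout
        return Outcome.FAIL
    }
}

fun main() {
    val auth = FingerprintAuth()
    // The attacker ends every series of spoofed images with an error-inducing frame:
    repeat(1_000) { auth.attempt(listOf(Outcome.FAIL, Outcome.FAIL, Outcome.ERROR)) }
    println("failed attempts counted: ${auth.failedAttempts}")   // 0, so the lockout never kicks in
}
```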

The Match-After-Lock vulnerability

The second vulnerability is Match-After-Lock (MAL). The fingerprint authentication logic provides for a lockout period following a failed attempt, but many smartphone vendors fail to correctly implement this feature in their Android versions. So even though successful fingerprint authentication is not possible in lockout mode, one can still submit more and more new images, to which the system will still respond with an honest ‘true’ or ‘false’ answer. That is, once you detect the correct image, you can use it as soon as the system is out of lockout, thus completing a successful authentication.

Attacks exploiting Cancel-After-Match-Fail and Match-After-Lock

The attack exploiting the first vulnerability was successful for all the tested smartphones with genuine Android onboard, but for some reason it didn’t work with HarmonyOS. Match-After-Lock was exploited on Vivo and Xiaomi smartphones as well as on both Huawei phones running HarmonyOS.

All the tested smartphones turned out to be vulnerable to at least one attack. Source

All Android and HarmonyOS smartphones participating in the study were found to be vulnerable to at least one of the described attacks. This means that all of them allowed an indefinite number of malicious fingerprint authentication attempts.

According to the study, it took from 2.9 to 13.9 hours to hack an Android smartphone authentication system with only one fingerprint registered. But for smartphones with the maximum possible number of registered fingerprints for a given model (four for Samsung, five for all the others), the time was greatly reduced: hacking them took from 0.66 to 2.78 hours.

Successful BrutePrint attack probability as a function of spent time: one registered fingerprint (solid line) and maximum number of registered fingerprints (dashed line). Source

What about iPhones?

The Touch ID system used in iPhones turned out to be more resistant to BrutePrint. According to the study, the iPhone’s main advantage is that the communication between the fingerprint sensor and the rest of the system is encrypted. So there’s no way to intercept or to feed the system a prepared fingerprint on a device equipped with Touch ID.

The study points out that iPhones can be partially vulnerable to manipulations used to maximize the number of possible fingerprint recognition attempts. However, it’s not as bad as it may sound: while Android smartphones allow attempts to continue indefinitely, on iPhones the number of attempts can only be increased from 5 to 15.

So iOS users can sleep peacefully: Touch ID is much more reliable than the fingerprint authentication used in both Android and HarmonyOS. On top of that, nowadays most iPhone models use Face ID anyway.

How dangerous is all this?

Android smartphone owners shouldn’t be too worried about BrutePrint either — in practice the attack hardly poses a major threat. There are several reasons for this:

  • BrutePrint requires physical access to the device. This factor alone reduces the probability of anything like it happening to you by a great margin.
  • Moreover, to pull off the attack one needs to open the device and make use of a specific connector on the motherboard. Doing that without the knowledge of the owner is hardly easy.
  • Even in the best case scenario, the attack will require considerable time — measured in hours.
  • And, of course, BrutePrint requires a peculiar setup — both hardware and software wise — including custom equipment, a fingerprint database, and trained AI.

Combined, these factors make it extremely unlikely that such an attack could be used in real life — unless some entrepreneurially-minded folks build an easy-to-use commercial product based on the study.

Protecting Android smartphones against fingerprint brute-forcing

If, despite the foregoing, you believe you could fall victim to such an attack, here are a few tips on how to protect yourself:

  • Register as few fingerprints as possible (ideally just one). The more fingers you use for authentication, the more vulnerable the system becomes to the described tactic as well as other attacks.
  • Don’t forget to use an extra PIN or password protection for apps that have this option.
  • By the way, the AppLock function available in the paid version of Kaspersky for Android allows using separate passwords for any of your apps.
How the A-GPS in your smartphone works, and whether Qualcomm is tracking you | Kaspersky official blog
https://www.kaspersky.com/blog/gps-agps-supl-tracking-protection/48175/

News that Qualcomm, a leading vendor of smartphone chips, tracked users with its geolocation service caused a minor stir in the tech press recently. In this post we’ll separate the truth from the nonsense in that story, and discuss how you can actually minimize undesired geolocation tracking. First things first, let’s look at how geopositioning actually works.

How mobile devices determine your location

The traditional geolocation method is to receive a satellite signal from GPS, GLONASS, Galileo, or Beidou systems. Using this data, the receiver (the chip in the smartphone or navigation device) performs calculations and pins down its location. This is a fairly accurate method that doesn’t involve the transmission of any information by the device — only reception. But there are significant drawbacks to this geolocation method: it doesn’t work indoors, and it takes a long time if the receiver isn’t used daily. This is because the device needs to know the exact location of the satellites to be able to perform the calculation, so it has to download the so-called almanac, which contains information about satellite positions and movement, and this takes between five and ten minutes to retrieve if downloading directly from satellite.

As a much quicker alternative to downloading directly from satellite, devices can download the almanac from the internet within seconds via a technology called A-GPS (Assisted GPS). As per the original specification, only actual satellite data available at the moment is transmitted, but several developers have added a weekly forecast of satellite positions to speed up the calculation of coordinates even if the receiver has no internet connection for days to come. The technology is known as the Predicted Satellite Data Service (PSDS), and the aforementioned Qualcomm service is the most impressive implementation to date. Launched in 2007, the service was named “gpsOne XTRA”, renamed to “IZat XTRA Assistance” in 2013, and in its most recent incarnation rebranded again as the “Qualcomm GNSS Assistance Service”.

How satellite signal reception works indoors and what SUPL is

As mentioned above, another problem with geopositioning using a satellite signal is that it may not be available indoors, so there are other ways of determining the location of a smartphone. The classic method from the nineties is to check which cellular base stations can be received at the current spot and to calculate the device’s approximate location from their signal strengths, given the known positions of those stations.

With minor modifications, this is supported by modern LTE networks as well. Smartphones are also able to check for nearby Wi-Fi hotspots and determine their approximate location. This is typically enabled by centralized databases storing information about Wi-Fi access points and provided by specific services, such as Google Location Service.
All existing geopositioning methods are tied together by SUPL (Secure User Plane Location), a standard supported by mobile operators and by smartphone, chipset and operating system developers. Any application that needs to know the user’s location gets it from the mobile operating system, which uses the fastest and most accurate combination of methods currently available.
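From an app’s point of view, all of this sits behind one framework call. Here’s a minimal sketch of how an ordinary app subscribes to location updates (requires the ACCESS_FINE_LOCATION runtime permission and a recent compileSdk); which mix of satellite, cell-tower and Wi-Fi data ends up behind each fix is decided by the OS, not the app:

```kotlin
import android.content.Context
import android.location.Location
import android.location.LocationListener
import android.location.LocationManager

// Sketch: requesting location updates from the network-based and satellite-based providers.
fun startLocationUpdates(context: Context) {
    val lm = context.getSystemService(Context.LOCATION_SERVICE) as LocationManager

    val listener = object : LocationListener {
        override fun onLocationChanged(location: Location) {
            println("lat=${location.latitude} lon=${location.longitude} via ${location.provider}")
        }
    }

    // NETWORK_PROVIDER = cell towers + Wi-Fi (the SUPL-style methods); GPS_PROVIDER = satellite fix.
    lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 10_000L, 50f, listener)
    lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 10_000L, 50f, listener)
}
```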

No privacy guaranteed

Accessing SUPL services doesn’t have to result in a breach of user privacy, but in practice, data does often get leaked. When your phone determines your location using nearby cellular base stations, the mobile operator knows exactly which subscriber sent the request and where they were at that moment. Google monetizes its Location Services by recording the user’s location and identifier; however, technically this is unnecessary.

As for A-GPS, servers can, in theory, provide the required data without collecting subscribers’ identifiers at all or storing any of their data. However, many developers do both. Android’s standard implementation of SUPL sends the smartphone’s IMSI (the unique SIM number) as part of a SUPL request. The Qualcomm XTRA client on the smartphone transmits subscribers’ “technical identifiers”, including IP addresses. According to Qualcomm, they “de-identify” the data; that is, they delete records linking subscriber identifiers and IP addresses after 90 days, and then use the data exclusively for certain “business purposes”.

One important point: data from an A-GPS request cannot be used for pinning down the user’s location. The almanac available from the server is the same anywhere on Earth — it’s the user’s device that calculates the location. In other words, all that the owners of these services could store is information about a user sending a request to the server at a certain time, but not the user’s location.

The accusations against Qualcomm

Publications criticizing Qualcomm are citing research by a certain someone who goes by the name Paul Privacy published on the Nitrokey website. The paper maintains that smartphones with Qualcomm chips send users’ personal data to the company’s servers via an unencrypted HTTP protocol without their knowledge. This allegedly takes place without anyone controlling it, as the feature is implemented at hardware level.

Despite the aforementioned data privacy issues that the likes of the Qualcomm GNSS Assistance Service suffer from, the research somewhat spooks and misleads users, and it contains a number of inaccuracies:

  • In old smartphones, information indeed could have been transmitted over insecure HTTP, but in 2016 Qualcomm fixed that XTRA vulnerability.
  • According to the license agreement, information such as a list of installed applications can be transmitted via the XTRA services, but practical tests (packet inspection and studying the Android source code) showed no proof of this actually happening.
  • Contrary to the researchers’ initial allegations, the data-sharing function is not embedded in the microchip (baseband) but implemented at OS level, so it certainly can be controlled: by the OS developers and by the modding community as well. Replacing and deactivating specific SUPL services on a smartphone has been a known skill since 2012, but this was done to make GPS work faster rather than for privacy reasons.

Spying protection: for everyone and for the extra cautious

So, Qualcomm (probably) does not track us. That said, tracking via geolocation is possible, but on a whole different level: weather apps and other seemingly harmless programs you use on a day-to-day basis do it systematically. What we suggest everyone should do is one simple yet important thing: minimize the number of apps that have access to your location. After all, you can choose a place manually to get a weather forecast, and entering a delivery address when shopping online is not that big a deal.
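If you’re curious how long that list is on your own device, the same information can be pulled programmatically. A sketch (on Android 11 and later a real tool would also need the QUERY_ALL_PACKAGES permission or suitable <queries> manifest entries because of package-visibility rules):

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager

// Sketch: list installed apps that have *requested* location permissions.
fun appsRequestingLocation(context: Context): List<String> {
    val pm = context.packageManager
    return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
        .filter { info ->
            info.requestedPermissions?.any {
                it == Manifest.permission.ACCESS_FINE_LOCATION ||
                    it == Manifest.permission.ACCESS_COARSE_LOCATION
            } == true
        }
        .map { it.packageName }
}
```

Of course, the permission manager in the system settings shows the same thing without any code.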

Those of you who want to prevent their location from being logged anywhere should take several extra protective steps:

  • Disable every geolocation service apart from the good old GPS on your smartphone.
  • Use advanced tools to block your phone from accessing SUPL services. Depending on the smartphone model and operating system type, this can be done by filtering the DNS server, a system firewall, a filtering router, or dedicated smartphone settings.
  • It’s best to avoid using cellphones… altogether! Even if you do all of the above, the mobile operator still knows your approximate location at any time.
Can I trust my data to repair technicians? | Kaspersky official blog
https://www.kaspersky.com/blog/repair-shops-privacy-issues/47715/

Probably everyone has damaged their smartphone, tablet or laptop and needed it repaired at least once in their lives. The cause of the damage may be the user’s own sloppiness: replacing broken smartphone screens brought countless billions of dollars to the industry. But more often, it’s just a random malfunction like the battery failing, the hard drive dying, or a key coming off the keyboard. And this can happen at any time.

Unfortunately, modern devices are made in such a way that even the handiest of computer wizards are often unable to fix them on their own. The repairability of smartphones is steadily decreasing from year to year. To fix the latest models, it takes not only skill and a general understanding of how all sorts of digital gizmos work; you also now need specialist tools, expertise, and access to documentation plus unique spare parts.

Therefore, when a smartphone or laptop breaks, the user usually has little choice other than finding a service center. After all, simply throwing out your broken device, buying another and starting over normally isn’t an option because you’d probably like to recover all the data that was on it. So, it’s over to the service center you head. But there’s a problem: you have to pass your device into the hands of a stranger. Photos and videos, correspondence and call history, documents and financial information can all end up being directly accessible by somebody you don’t know. Can this person be trusted?

Homemade porn viewings at repair shops are a thing

I personally gave this some serious thought recently after what a friend of mine told me. He’d had an informal chat with some guys working at a small repair shop, who told him without any hesitation how they occasionally held viewings, for employees and their friends, of homemade porn found on the devices brought in for repair!

Similar incidents pop up in the news from time to time. Employees stealing private photos of customers have been found in more than one service center. And sometimes even bigger stories emerge: in one case, service-center employees not only stole photos of female customers for years, but also put together entire collections of them and shared them.

But, surely such incidents are exceptions to common practice? Not every service center has staff eager to get their hands on customers’ personal data, right? Unfortunately, results of a study I recently came across show that breaches of customer privacy by maintenance technicians are a much more common problem than we would all like to think. In fact, it seems highly likely that excessive curiosity on the part of repair staff is a feature of this industry rather than isolated outrageous incidents. But let’s not get ahead of ourselves. I’ll take you through it all step by step.

How electronics repair services treat their customers’ data

A study was conducted by researchers at the University of Guelph in Canada. It consists of four parts: two devoted to the analysis of conversations with customers of repair services, and two field studies carried out in the service shops themselves (which I will focus on here). In the first of the “field” parts, the researchers tried to find out how repair shops treat privacy in terms of their intentions. First and foremost, the researchers were interested in what privacy policies or procedures the service shops had in place to safeguard customers’ data.

To do this, the researchers visited nearly 20 service shops of various types (from small local repairers to regional and national service providers). The reason for each visit was to replace the battery in an ASUS UX330U laptop. The reason behind the choice of malfunction was simple: diagnosing the problem and solving it does not require access to the operating system, and all the necessary tools for this are in the laptop’s UEFI (the researchers use the old-fashioned term BIOS).

The researchers’ visits to the service centers involved several steps. First, they looked for any information readily available to the customer regarding the service center’s data privacy policy. Second, they checked to see if the employee taking the device would request the username and password to log in to the operating system and, if so, how they would justify the need to hand that information over (there’s no obvious reason for this because, as stated, battery replacement doesn’t require access to the operating system). Third, the researchers noted how the password for the device being handed over for repair was stored. Finally, fourth, they asked the employee accepting the equipment a direct and unambiguous question: “How do you make sure no one will access my personal data?” to find out what privacy policies and protocols were in place.

The results of this part of the study were disappointing.

  • None of the service shops visited by the researchers informed the “customers” about any respective privacy policy before accepting the device.
  • Except for a single regional center, all services asked for the login password – arguing that it’s simply required for either diagnostics or repair, or to check the quality of provided services (which, as mentioned above, isn’t the case).
  • When asked if it was possible to perform battery replacement without a password, all three national providers replied “no”. At five smaller services they said that without a password they wouldn’t be able to check the quality of work carried out and therefore refused to take responsibility for the results of the repair. Another shop suggested removing the password altogether if the customer didn’t want to share it! And finally, the last shop visited said that if they’re not given the password the device could be reset to factory settings should the maintenance technician need to do so.
  • As for storage of credentials, in almost all cases they were stored in an electronic database along with the customer’s name, phone number and e-mail address, but there was no explanation as to who could access this database.
  • In about half of cases, the credentials were also physically attached to the laptop handed over for repair. It was either printed out and attached as a sticker (in the case of larger services), or simply handwritten on a sticky note – that’s classic! Thus, it would appear that any of the employees of the service shops (maybe even casual visitors too) could have access to the passwords.
  • When asked how data privacy would be guaranteed, the employee who accepted the device and other repair staff gave assurances that only the technician repairing the device would have access to it. However, further inquiries showed that there was no mechanism that could guarantee this; customers simply had to take their word for it.

So what do maintenance technicians do with customers’ personal data?

Having established that the service centers have no mechanisms to curb the curiosity of their specialists, in the next part of the study the researchers examined what actually happens to a device after it's handed over for repair. To do this, they bought six new laptops and simulated a basic problem on each: they simply disabled the audio driver. The "repair" therefore required only superficial diagnostics and the quick fix of re-enabling the driver. This particular malfunction was chosen because, unlike other services (such as removing viruses from the system), "fixing" the audio driver requires no access to user files whatsoever.

The researchers made up fictitious user identities on the laptops (male users in the first half of the experiment and female users in the second half). They created a browser history, email and gaming accounts, and added various files, including photos of the experimenters. They also planted two pieces of "bait": a file with the credentials to a cryptocurrency wallet, and a separate folder containing mildly explicit images. For the latter, the researchers used real photos of women, sourced from Reddit users who had given their consent beforehand, of course.

Finally, and most importantly, before the laptops were handed over to the service, the researchers turned on the Windows Problem Steps Recorder utility, which records every action performed on the device. After that, the laptops were passed on “for repair” to 16 service centers. Again, to get a complete picture, the researchers visited both small local services and centers of major regional or national providers. The genders of the “customers” were evenly distributed: in eight cases devices were configured with a fictional female persona, and in the other eight – with a male one.
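For reference, Problem Steps Recorder ships with Windows as psr.exe and can be driven from the command line. The sketch below is my own illustration of how such a recording could be started and stopped from Python; the flags (/start, /stop, /output, /sc, /gui) follow psr.exe's documented options, but this is an assumption about how one might set it up, not a reproduction of the researchers' configuration:

```python
# A hedged sketch: start and stop Windows Problem Steps Recorder (psr.exe) from Python.
# The flags used here are taken from psr.exe's documented command-line options;
# exact behavior may vary by Windows version.
import subprocess
import time

def record_steps(output_zip: str, seconds: int) -> None:
    """Record user actions (with screenshots) for a fixed interval into a .zip report."""
    # Launch the recorder in the background, with no visible GUI
    subprocess.Popen(["psr.exe", "/start", "/output", output_zip, "/sc", "1", "/gui", "0"])
    time.sleep(seconds)
    # A second invocation with /stop tells the running recorder to finish and save
    subprocess.run(["psr.exe", "/stop"], check=True)

if __name__ == "__main__":
    record_steps(r"C:\Temp\steps_report.zip", seconds=60)
```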

Here’s what the researchers found out:

  • Despite its simplicity, the problem with the audio driver was solved in the “customer’s” presence after a short wait in just two cases. In all other experiments, the laptops had to be left until at least the next day. And the service centers of national service providers kept them in for “repair” for at least two days.
  • For two local services, it wasn't possible to collect logs of the repair staff's actions. In one case, no plausible explanation for this could be found. In the other, the researchers were told that the maintenance technicians had had to run antivirus software on the device and clean up its disk due to multiple viruses (the researchers were absolutely sure that the laptop could not have been infected at the time of drop-off).

In the other cases, the researchers were able to explore the logs; here are their findings:

  • Among the remaining logs, the researchers found six cases in which the repairers accessed personal files or browser history. Four of these were recorded on the "female" laptops, and two on the "male" ones.
  • In half of the incidents, curious service center employees tried to hide traces of their actions by clearing the list of most recently opened Windows files.
  • The repair staff were most interested in image folders. Their contents (including the explicit photos) were viewed in five cases; four of the laptops involved "belonged to" women and one to a man.
  • Browser history was the subject of interest for two laptops – both “belonging to” males.
  • Financial data was viewed once – on a “male’s” device.
  • In two cases, user files were copied by maintenance technicians to an external device. Both times these were the explicit photos, and in one case the aforementioned financial data was copied as well.
Results of a study on customer privacy violations by service-center employees

In about half of all cases, service-center employees gained access to user files. They were almost always interested in pictures – including explicit photos

How to protect yourself from nosy maintenance technicians

Of course, it should be borne in mind that this is a Canadian study, and it wouldn't be right to project its results onto all countries. Nevertheless, I somehow doubt that the situation elsewhere in the world is much better. It's likely that service centers in most countries, just as in Canada, have no cogent mechanisms in place to prevent their employees from violating customer privacy. And it's also likely that such employees take advantage of the lack of restrictions set by their employers to pry into customers' personal data – especially that of women.

So, before you take your device to the service center, it’s worth doing a little preparation:

  • Be sure to make a complete backup of all data contained on the device to an external storage device or to the cloud (if possible, of course). It’s standard practice for service centers to make no guarantees as to the safety of customer data, so you may well lose valuable files in the course of a repair.
  • Ideally, your device should be completely cleared of all data and reset to factory settings before taking it in for repair. For example, this is exactly what Apple recommends doing.
  • If clearing and preparing the device for service isn’t possible (for example, your smartphone’s display is broken), then try to find a service that will do everything quickly and directly in front of you. Smaller centers are usually more flexible in this regard.
  • As for laptops, it may be sufficient to hide all confidential information in a crypto container (for instance, using a security solution), or at least in a password-protected archive (see the sketch after this list).
  • Owners of Android smartphones should use the app-locking feature in Kaspersky Premium for Android. It lets you lock all your apps with a separate PIN code that's in no way related to the one used to unlock your smartphone.
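On the password-protected-archive point above, here's a hedged sketch assuming the third-party pyzipper library (which adds AES encryption on top of Python's standard zipfile): it packs a folder of sensitive files into an encrypted archive before the device is handed over. Treat it as an extra layer, not a substitute for the full backup mentioned earlier.

```python
# A hedged sketch using the third-party "pyzipper" library to create an
# AES-encrypted, password-protected ZIP archive of a folder of sensitive files.
import pathlib
import pyzipper

def protect_folder(folder: str, archive: str, password: bytes) -> None:
    """Pack everything under `folder` into an AES-encrypted ZIP at `archive`."""
    root = pathlib.Path(folder)
    with pyzipper.AESZipFile(archive, "w",
                             compression=pyzipper.ZIP_LZMA,
                             encryption=pyzipper.WZ_AES) as zf:
        zf.setpassword(password)
        for path in root.rglob("*"):
            if path.is_file():
                # Store paths relative to the folder root inside the archive
                zf.write(path, arcname=str(path.relative_to(root)))

if __name__ == "__main__":
    # Folder name and passphrase are placeholders for illustration only
    protect_folder("Documents/private", "private.zip", b"use-a-long-unique-passphrase")
```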
A new method of wiretapping smartphones using an accelerometer | Kaspersky official blog https://www.kaspersky.com/blog/non-standard-smartphone-wiretapping/47113/ Thu, 09 Feb 2023 15:05:02 +0000 https://www.kaspersky.com/blog/?p=47113 In late December 2022, a team of scientists from several US universities published a paper on wiretapping. The eavesdropping method they explore is rather unusual: words spoken by the person you’re talking to on your smartphone reproduced through your phone’s speaker can be picked up by a built-in sensor known as the accelerometer. At first glance, this approach doesn’t seem to make sense: why not just intercept the audio signal itself or the data? The fact is that modern smartphone operating systems do an excellent job of protecting phone conversations, and in any case most apps don’t have permission to record sound during calls. But the accelerometer is freely accessible, which opens up new methods of surveillance. This is a type of side-channel attack, one that so far, fortunately, remains completely theoretical. But, over time, such research could make non-standard wiretapping a reality.

Accelerometer features

An accelerometer is a special sensor for measuring acceleration; together with another sensor, the gyroscope, it helps detect changes in the position of the phone it's built into. Accelerometers have been built into all smartphones for more than a decade now. Among other things, they rotate the image on the screen when you turn your phone around. Sometimes they are used in games or, say, in augmented reality apps, when the image from the phone's camera is overlaid with virtual elements. Step counters work by tracking phone vibrations as the user walks. And if you flip your phone to mute an incoming call, or tap the screen to wake up the device, these actions too are picked up by the accelerometer.

How can this standard yet "invisible" sensor eavesdrop on your conversations? When the other person speaks, their voice is played through the built-in speaker, causing it, and the body of the smartphone, to vibrate. It turns out that the accelerometer is sensitive enough to detect these vibrations. Although researchers have known about this for some time, the tiny size of these vibrations used to rule out full-fledged wiretapping. But in recent years the situation has changed, and not for the better: smartphones now boast more powerful speakers. Why? To improve the volume and sound quality when you're watching a video, for example. A byproduct of this is better sound quality during phone calls, since they use the same speaker. The U.S. team of scientists clearly demonstrates this in their paper:

Data from smartphone accelerometers during speech playback

Spectrogram generated while playing the word “zero” six times:
(a) – from accelerometer data of Oneplus 3T ear speaker (older model, no stereo speakers);
(b) – from accelerometer data of Oneplus 7T ear speaker (newer model, with stereo speakers);
(c) – from accelerometer data of Oneplus 7T loud speaker (newer model, with stereo speakers).

On the left is a relatively old smartphone of 2016 vintage, not equipped with powerful stereo speakers. In the center and on the right is a spectrogram from the accelerometer of a more modern device. In each case, the word “zero” is played six times through the speaker. With the old smartphone, the sound is barely reflected in the acceleration data; with the new one, a pattern emerges that roughly corresponds to the played words. The best result can be seen in the graph on the right, where the device is in loudspeaker mode. But even during a normal conversation, with the phone pressed to the ear, there is enough data for analysis. It turns out that the accelerometer acts as a microphone!

Let's pause here to evaluate the difficulty of the task the researchers set for themselves. The accelerometer may act as a microphone, but a very, very poor one. Suppose we got the user to install malware that tries to eavesdrop on phone conversations, or we built a wiretapping module into a popular game. As mentioned above, our program doesn't have permission to directly record conversations, but it can monitor the state of the accelerometer. The number of requests that can be made to this sensor per second is limited and depends on the specific model of both the sensor and the smartphone. For example, one of the phones in the study allowed 420 requests per second (measured in Hertz (Hz)), another — 520Hz. Starting with version 12, the Android operating system introduced a limit of 200Hz. This polling frequency is known as the sampling rate, and it caps the frequency range of the resulting "sound recording": by the Nyquist theorem, the highest frequency that can be captured is half the rate at which we receive data from the sensor. This means that at best the researchers had access to the frequency range from 1 to 260Hz.
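To make the sampling-rate limitation concrete, here's a minimal Python sketch (my own illustration, not code from the paper) that prints the usable bandwidth for the rates mentioned above and shows how a tone above that limit gets folded down to a false, lower frequency:

```python
# A minimal sketch (not from the paper) illustrating why the accelerometer's
# sampling rate caps the recoverable audio bandwidth (the Nyquist limit).
import numpy as np

def usable_bandwidth(sampling_rate_hz: float) -> float:
    """Highest frequency recoverable from a signal sampled at this rate."""
    return sampling_rate_hz / 2

# Sensor rates mentioned in the study, plus the Android 12 cap
for rate in (420, 520, 200):
    print(f"{rate} Hz sampling -> frequencies up to {usable_bandwidth(rate):.0f} Hz")

# Sampling a 1000 Hz tone at 520 Hz: the tone "folds" down to an alias at 40 Hz
fs, f_tone = 520, 1000
t = np.arange(0, 1.0, 1 / fs)                      # one second of samples
samples = np.sin(2 * np.pi * f_tone * t)
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), 1 / fs)
print(f"Strongest observed frequency: {freqs[spectrum.argmax()]:.0f} Hz (true tone: {f_tone} Hz)")
```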

The frequency range used for voice transmission is roughly 300 to 3400Hz, but what the accelerometer "overhears" is not a voice: if we try to play back this "recording", we get a murmuring noise that only remotely resembles the original sound. So the researchers used machine learning to analyze these voice traces. They created a program that takes known samples of the human voice and compares them with the data captured from the accelerometer. Such training then allows a voice recording of unknown content to be deciphered with a certain margin of error.
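The authors' real pipeline is considerably more sophisticated; the sketch below (my own simplification, using synthetic stand-in data) only illustrates the general idea: spectrograms of labeled accelerometer traces serve as templates, and an unknown trace is matched to the closest one.

```python
# A toy simplification of the approach described above (not the authors' pipeline):
# spectrograms of labeled accelerometer traces act as templates, and an unknown
# trace is assigned the label of its nearest neighbor.
import numpy as np
from scipy.signal import spectrogram

FS = 200  # accelerometer sampling rate in Hz (the Android 12 cap)

def features(trace: np.ndarray) -> np.ndarray:
    """Flattened log-spectrogram of a 1-D accelerometer trace."""
    _, _, sxx = spectrogram(trace, fs=FS, nperseg=64, noverlap=32)
    return np.log1p(sxx).ravel()

def classify(unknown: np.ndarray, labeled: dict) -> str:
    """Return the label of the known trace closest to the unknown one."""
    target = features(unknown)
    return min(labeled, key=lambda word: np.linalg.norm(features(labeled[word]) - target))

# Synthetic traces standing in for real accelerometer captures of spoken words
rng = np.random.default_rng(0)
labeled = {word: rng.normal(size=2 * FS) for word in ("zero", "one")}
noisy_zero = labeled["zero"] + 0.1 * rng.normal(size=2 * FS)
print(classify(noisy_zero, labeled))  # expected: "zero"
```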

Spying

For researchers of wiretapping methods, this is all too familiar. The authors of the new paper refer to a host of predecessors who have shown how to obtain voice data using the seemingly most unlikely of objects. Here's a real example of a spying technique: from a nearby building, attackers direct an invisible laser beam at the window of the room where the conversation they want to eavesdrop on is taking place. The sound waves from the voices cause the window pane to vibrate ever so slightly, and this vibration is traceable in the reflected laser beam. That data is sufficient to reconstruct the content of a private conversation. Back in 2020, scientists from Israel showed how speech can be reconstructed from the vibrations of an ordinary light bulb: sound waves cause small changes in its brightness, which can be detected at a distance of up to 25 meters. Accelerometer-based eavesdropping is very similar to these spying tricks, but with one important difference: the "bug" is already built into the device being tapped.

Yes, but to what extent can the content of a conversation be recovered from accelerometer data? Although the new paper seriously improves the quality of wiretapping, the method cannot yet be called reliable. In 92% of cases, the accelerometer data made it possible to distinguish one voice from another. In 99% of cases, it was possible to correctly determine gender. Actual speech was recognized with an accuracy of 56% — half of the words could not be reconstructed. And the data set used in the test was extremely limited: just three people saying a number several times in succession.

What the paper did not cover was the ability to analyze the speech of the smartphone user themselves. If we only hear the sound from the speaker, at best we have only half the conversation. When we press the phone to our ear, vibrations from our own speech should also reach the accelerometer, but their quality is bound to be far worse than that of the vibrations from the speaker. This remains to be studied in more detail in future research.

Unclear future

Fortunately, the scientists were not looking to create a usable wiretapping device for the here and now. They were simply testing out new methods of privacy invasion that may one day become relevant. Such studies allow device manufacturers and software developers to proactively develop protection against theoretical threats. Incidentally, the 200Hz sampling-rate limit introduced in Android 12 does not really help: it lowered the recognition accuracy in real experiments, but not by much. Far greater interference comes from the smartphone user themselves during a conversation: their voice, hand movements, and general movement. The researchers were unable to reliably filter these vibrations out of the useful signal.

The most important aspect of the study was the use of the smartphone's built-in sensor: all previous methods relied on various additional tools, but here we have out-of-the-box eavesdropping. Despite the modest practical results, this interesting study shows how such a complex device as a smartphone is riddled with potential channels for data leakage. On a related note, we recently wrote about how signals from Wi-Fi modules in phones, computers, and other devices unwittingly give away their location, how robot vacuum cleaners spy on their owners, and how IP cameras like to peep where they shouldn't.

And while such surveillance methods are unlikely to threaten the average user, it would be nice if the technology of the future were armed against all risks of spying, eavesdropping, and sneaky peeking, however small. But since these cases involve malware being installed on your smartphone, you should always have the ability to trace and block it.

AirTag stalking and how to protect yourself | Kaspersky official blog https://www.kaspersky.com/blog/how-to-protect-from-stalking-with-airtag/43705/ Thu, 17 Feb 2022 17:24:52 +0000 https://www.kaspersky.com/blog/?p=43705 Apple’s AirTags have only been on the market since last spring, but they have already earned a bad reputation for being a way to facilitate criminal activity and track people without their permission. In this article we look closely at how AirTags work and why they can be dangerous. We also tell you how to protect yourself from being tracked with AirTags and from other types of cyberstalking.

How AirTags work

Apple unveiled AirTags in April 2021 as devices that help you search for easy-to-lose objects. Inside an AirTag is a board with a wireless module, a replaceable battery, and a speaker that's actually rather large; together, that's really the bulk of the device.

Here’s how AirTags work in the simplest scenario: you stick the little fob on your keys, and if one day you’re running late for work and your keys are lost somewhere in your apartment, you activate search mode on your iPhone. Using ultra-wideband (UWB) technology, the phone points you toward the AirTag, giving you helpful prompts like “hot” or “cold.”

In a more complicated scenario, suppose you’ve attached the AirTag to your backpack and one day you rush off the subway so fast you accidentally leave it behind. Since you and your iPhone are already far away from your backpack when you realize you lost it, UWB won’t help you. Now anyone who has a relatively modern Apple device — iPhone 7 and newer — can get involved. Using Bluetooth, they detect the AirTag nearby and transmit approximate or specific coordinates to your Apple account. Now you can use Apple’s Find My service to see where your backpack has ended up — such as in the lost-and-found office or with a new owner. What’s key is that all of this happens automatically; you don’t even need to install anything. Everything the AirTag search system needs to work is already built into the iOS of hundreds of millions of users.
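Under the hood, the tag is simply broadcasting Bluetooth Low Energy advertisements, which nearby Apple devices pick up and relay. As a rough illustration of how visible such broadcasts are, here's a hedged Python sketch using the third-party bleak library; it lists nearby BLE advertisements carrying Apple's Bluetooth company identifier (0x004C), so it flags Apple BLE traffic in general rather than AirTags specifically:

```python
# A hedged sketch (not an AirTag detector): list nearby BLE advertisements that
# carry Apple's Bluetooth company identifier (0x004C). Requires the third-party
# "bleak" library and a working Bluetooth adapter.
import asyncio
from bleak import BleakScanner

APPLE_COMPANY_ID = 0x004C

async def main() -> None:
    # Scan for 10 seconds and keep the advertisement data alongside each device
    devices = await BleakScanner.discover(timeout=10.0, return_adv=True)
    for device, adv in devices.values():
        if APPLE_COMPANY_ID in adv.manufacturer_data:
            payload = adv.manufacturer_data[APPLE_COMPANY_ID].hex()
            print(f"{device.address}  RSSI={adv.rssi}  payload={payload}")

if __name__ == "__main__":
    asyncio.run(main())
```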

But considering that Bluetooth has a maximum distance range of just a few dozen meters, this works only in large cities, where there are a lot of people with iPhones. If your backpack ends up in a small town where all the residents use Android smartphones (or even the latest push-button phones that barely connect to the Internet), it will be challenging to pin down the location of the AirTag. In this case a third detection mechanism kicks in: if a few hours go by and the AirTag hasn’t had a connection with any iPhone, the built-in speaker starts playing a sound. If the person who finds the item figures out how to connect their smartphone with NFC to the AirTag, the AirTag tells them the phone number of the item’s owner.

AirTags and shady business

In theory, AirTags are a useful and, at $29 for one or $99 for a pack of four, relatively inexpensive accessory for everyday tracking of easy-to-lose objects. The technology can help you find your hidden keys or a bag you've left behind. One example of a useful application that has been widely discussed over the last year is sticking an AirTag on a suitcase before getting on a plane. On a number of occasions, travelers have been able to locate their lost baggage faster than the airline employees could.

But in practice, right after the device went on sale, reports started cropping up about how people used it in ways that were not completely legal, and there were even reports of overt criminal activity. Here are the major examples.

  • An activist from Germany uncovered the location of a top-secret state agency after mailing it an envelope containing an AirTag. A lot of people use such a tactic — which is more or less legal depending on the laws of a country — to track actual mail delivery routes, for example. But it’s also possible to use an AirTag like the German activist did: if someone uses a PO Box to receive mail so they can keep their real address private, a piece of mail that has an AirTag inside it will reveal the actual place of residence.
  • On a more serious note, in December 2021 the Canadian police investigated several incidents in which criminals used AirTags to steal cars. They stuck an AirTag on a car in a public parking lot, used it to figure out where the owner lived, and then at night stole the car while it was parked in a suburb, a little further from potential witnesses.
  • Finally, there are many testimonials involving the use of AirTags to stalk women. In this case, the perpetrators stick an AirTag on a woman’s car or slip it into her bag, and then they ascertain where she lives and see the routes she travels regularly. AirTags contain protection against this kind of stalking: if the tag is constantly moving around while being far away from the iPhone it’s tied to, the built-in speaker starts beeping. However, it didn’t take long for tinkerers to figure out that there’s a workaround: modified AirTags with the beeper disabled have recently started showing up on the market.

But this isn't even the most frightful scenario. In theory, one could hack an AirTag and modify its behavior in software. Clear steps in this direction have already been made: for instance, last May a researcher successfully gained access to the device's protected firmware. The most dangerous situation for Apple and its users would be if someone managed to exploit the network of hundreds of millions of iPhones to track people illegally, without the knowledge of the manufacturer, of the owners of the smartphones taking part in a search operation, or of the victims themselves.

How dangerous AirTags are

The most frightful scenario has not yet come to pass, and it is unlikely to — after all, Apple cares about the security of its own infrastructure. You also need to keep in mind that there are other devices similar to AirTags. Various legal and illegal tracking devices have existed for over a decade.

Moreover, even consumer tags with similar functionality to AirTags have been on the market for a long time. Tile released its tags in 2013, and they also offer ways to search for lost objects over a large distance by applying the same principle as AirTags. Of course, this company probably won’t be able to achieve “coverage” from hundreds of millions of iPhones. In addition, devices like these cost money — sometimes a lot of money — and they are relatively easy to detect.

In the case of AirTags, they need to be connected to an Apple account, which is hard to create anonymously: doing so requires a real name and, usually, a credit card number. If a case of illegal tracking is reported to the police, Apple turns over this data. Admittedly, you first need to convince the police to request it, and according to testimonials from victims in different countries, this doesn't always happen.

Ultimately, it’s the same story we always see: AirTags are a handy piece of technology that criminals can also use for malicious purposes. Apple didn’t invent cyberstalking, but it did come up with a convenient technology that enables people to engage in illegal stalking. That means that it’s the company’s responsibility to make it harder for people to use the device for objectionable purposes.

Once again, the closed ecosystem of Apple’s software and devices has come under criticism. If you have an iPhone and someone has snuck an AirTag into your bag, your phone will notify you. But what if you don’t have an iPhone? For the time being, Apple has developed a band-aid solution by releasing an app for Android smartphones that you need to install to detect tracking. The upshot is that Apple created a problem for everyone but offered a simple solution only to its own customers. Everyone else needs to adjust somehow.

This month Apple tried to respond to the avalanche of criticism by issuing a long statement. It acknowledged that before releasing AirTag it hadn’t envisioned all the ways of using it — whether legal or illegal. It pledged to tell AirTag buyers more explicitly that AirTags are not to be used for tracking people. It also plans to raise the volume of the beep that helps you find an AirTag someone has planted on your belongings. This is laudable, but it doesn’t solve all the problems. We hope that over time Apple will be able to clearly separate legal and illegal ways of using AirTags.

Stalkerware

In conclusion, we need to mention that using software for surveillance is much more dangerous and commonplace in real life than AirTags. Apple’s AirTags cost a fair amount of money, a person doing the tracking needs to pair an AirTag with their real account, and the manufacturer is actually trying to make it harder to hide the tags.

In contrast, developers of spyware and stalkerware apps are doing their best to make them as undetectable as possible. In addition to tracking location, tracking apps give the spy a heap of other options. In particular, they open access to the victim’s documents, photos and messages, which can be even more dangerous than geolocation. So if you’re worried about being tracked, the first thing you need to do is protect your smartphone — it’s the most obvious target.

Then you can look around for unknown AirTags. If you use an iPhone, it will notify you pretty quickly that there’s a tag. If you have an Android and you want to protect yourself from being tracked with an AirTag, install the Apple Tracker Detect app.

How to find a spy camera with your smartphone | Kaspersky official blog https://www.kaspersky.com/blog/finding-spy-cameras-with-smartphone/43391/ Mon, 17 Jan 2022 14:29:05 +0000 https://www.kaspersky.com/blog/?p=43391 Spy cameras in rented apartments or hotel rooms: fact or fiction? Fact, unfortunately. In a quite recent case, a family from New Zealand, having rented an apartment in Ireland, discovered a hidden camera livestreaming from the living room.

To spot a camera with the naked eye often requires X-ray vision, as it will almost certainly be carefully camouflaged. For those of us who aren’t Superman, there are special devices to help detect spy devices by electromagnetic radiation or Wi-Fi signal, but they are not standard travel items. And to get the most out of them you will need special skills or expert assistance.

That said, researchers in Singapore have recently developed a solution for locating a hidden device using the ToF sensor inside a regular smartphone. The new method goes by the name of LAPD (Laser-Assisted Photography Detection).

What is a ToF sensor?

Even if the terms “ToF sensor” and “ToF camera” mean nothing to you, you might already have encountered one in your smartphone. It is used, for example, to unlock the screen by face, to recognize gestures or to create the beloved bokeh effect — an out-of-focus background in photos.

To handle these tasks, the smartphone needs to see a three-dimensional picture, so that it knows what's near the camera and what's farther away. This is handled by ToF (which, by the way, stands for Time-of-Flight): the sensor emits an infrared beam and measures the time it takes for the light to be reflected back. The longer this takes, the farther the object is from the sensor, of course.
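The arithmetic behind this is straightforward: distance equals the round-trip time multiplied by the speed of light, divided by two. A quick back-of-the-envelope check in Python (my own illustration):

```python
# Back-of-the-envelope ToF math: distance = (speed of light * round-trip time) / 2
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_m(round_trip_s: float) -> float:
    """Distance to the reflecting object, given the round-trip time of the light pulse."""
    return SPEED_OF_LIGHT * round_trip_s / 2

# An object about 1.5 m away reflects the pulse back in roughly 10 nanoseconds
print(f"{distance_m(10e-9):.2f} m")  # ~1.50 m
```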

As the researchers found, in addition to their basic duties, ToF modules do a good job of finding spy cameras. This is because the light emitted by the sensor creates a characteristic glare on the lens, by which the offending device can be recognised.

Pimp my sensor

Since the smartphone camera features that rely on ToF were created for very different purposes, the researchers had to develop a separate app and supplement the sensor's capabilities to better cope with searching for glare from hidden cameras.

First, they added a smart system that informs the user of the optimal range for scanning for objects. If the smartphone is too close to a suspicious object, the glare from it will be very bright and “oversaturate” the sensor. Conversely, if the device is too far away, the glare will be too weak and the detector won’t be able to register it.

Second, the team applied a filter to sift out extraneous signals, the reason being that the detector can more-or-less accurately identify a hidden lens only in a limited field of view: a cone spanning approximately 20° originating from the smartphone’s camera. Anything outside this cone confuses the detector and leads to false positives.

Lastly, the experts applied self-learning programmable filters to further reduce the false-positive rate. The fact is that the resolution of ToF sensors is very low, only 320×240 pixels. For this reason, it is not easy for the detector to determine the exact size, shape and intensity of the glare — and it’s these parameters that distinguish a hidden camera from other, innocuous objects.
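To give a feel for the first stages of this processing, here's a deliberately simplified Python sketch (my own illustration, not the LAPD code): it takes a single normalized 320×240 intensity frame, keeps only near-saturated pixels, and returns the centers of blobs small enough to plausibly be a lens glint. The threshold values are made-up parameters for illustration only.

```python
# A simplified illustration (not the LAPD implementation): find small, bright
# "glare" blobs in a single 320x240 ToF intensity frame normalized to [0, 1].
import numpy as np
from scipy import ndimage

def glare_candidates(frame: np.ndarray,
                     brightness_threshold: float = 0.9,
                     max_blob_pixels: int = 25):
    """Return centers (row, col) of small bright blobs in the frame."""
    bright = frame >= brightness_threshold      # keep only near-saturated pixels
    labels, count = ndimage.label(bright)       # group them into connected blobs
    centers = []
    for blob_id in range(1, count + 1):
        blob = labels == blob_id
        if np.count_nonzero(blob) <= max_blob_pixels:  # a lens glint is tiny at 320x240
            centers.append(ndimage.center_of_mass(blob))
    return centers

# Toy usage: a dark frame with one small bright spot centered at (120, 160)
frame = np.zeros((240, 320))
frame[119:122, 159:162] = 1.0
print(glare_candidates(frame))  # ~[(120.0, 160.0)]
```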

Test results

The researchers carried out an experiment to establish how effective the method is in practice. The study involved 379 volunteers, who were asked to find spy cameras in three ways: with the naked eye; with a specialized detector currently on the market; and with LAPD. They were given 30–60 seconds to search — about as much time as the average traveler is ready to spend on this activity.

The first method delivered unimpressive results. Only 46% of participants were able to detect the hidden cameras by eyesight alone. What’s more, there were very many false positives: in 54.9% of cases, the volunteers pointed to incorrect objects or places.

The specialized detector also showed modest results: in two different scanning modes, the spy cameras were found by just 62.3% and 57.7% of participants. The number of false positives was again high: 26.9% and 35.2%, respectively.

The LAPD method proved the most effective. The spy cameras were found in 88.9% of cases. Even when the researchers altered the lighting to make the search more difficult, the detection rate did not drop below 77.8%. The share of false positives was also low, at 16.67%.

Work in progress

Despite the impressive test results, we won’t be seeing an LAPD app on the App Store or Google Play anytime soon. According to the researchers themselves, modern ToF sensors still lack sufficient resolution and accuracy for in-the-field work; besides, not all smartphones have such a module. So, until the design is improved and manufacturers release new cameras, users will have to remain patient and rely on other methods to sniff out hidden cameras.
