TriangleDB, spyware implant of Operation Triangulation (June 21, 2023)

Not so long ago, our technologies detected a new APT attack on iPhones. The attack was part of a campaign aimed at, among others, Kaspersky employees. Unknown attackers used an iOS kernel vulnerability to deploy a spyware implant dubbed TriangleDB in the device’s memory. Our experts have been able to study this implant thoroughly.

What can the TriangleDB implant do?

Studying this implant was no easy task, since it works only in the phone’s memory — leaving no traces in the system. That is, a reboot completely wipes all traces of the attack, and the malware also has a self-destruct timer that activates automatically 30 days after the initial infection (unless the operators send a command to extend its working time). The basic functionality of the implant includes the following features:

  • file manipulation (creation, modification, deletion and exfiltration);
  • manipulation of running processes (listing and terminating them);
  • exfiltration of iOS keychain elements — which may contain certificates, digital identities, and/or credentials for various services;
  • transmission of geolocation data — including coordinates, altitude, and speed and direction of movement.

Also, the implant can load additional modules into the phone’s memory and run them. If you’re interested in the technical details of the implant, you can find them in a post on the Securelist blog (aimed at cybersecurity experts).

APT attacks on mobile devices

To date, the main targets of APT attacks have mostly been traditional personal computers. However, modern mobile devices are now comparable to office PCs in terms of both performance and functionality. They’re used to interact with business-critical information, store both personal and business secrets, and can serve as access keys to work-related services. Therefore, APT groups are putting ever more effort into designing attacks on mobile operating systems.

Of course, Triangulation is not the first attack aimed at iOS devices. Everyone remembers the infamous (and, unfortunately, still ongoing) case of the commercial spyware Pegasus. There were other examples too, such as Insomnia, Predator and Reign. And it’s no wonder that APT groups are interested in Android as well. Not so long ago, news outlets wrote about an attack by the “Transparent Tribe” APT group, which used the CapraRAT backdoor against Indian and Pakistani users of that system. And in the third quarter of last year, we discovered previously unknown spyware targeting Farsi-speaking users.

All this suggests that in order to protect a company from APT attacks these days, it’s necessary to ensure the security of not only stationary equipment — servers and workstations — but also of mobile devices used in the work process.

How to improve your chances against APT attacks on mobiles

It would be wrong to assume that the default protection technologies provided by device manufacturers are enough to protect mobile devices. The Operation Triangulation case clearly shows that even Apple’s technologies aren’t perfect. Therefore, we recommend that businesses employ a multi-level protection system, which includes convenient tools for mobile device control, plus systems that can monitor their network interactions.

The first line of defense should be an MDM-class solution. Our Endpoint Security for Mobile provides centralized management of mobile device security via Kaspersky Security Center, our administration console. In addition, our solution provides protection against phishing, web threats and malware (for Android only; unfortunately, Apple doesn’t allow third-party antivirus solutions).

In particular, it employs our Cloud ML for Android technology to detect Android malware. This technology runs in the KSN cloud and is based on machine-learning methods: a model trained on millions of known Android malware samples detects even previously unknown malware with high precision.

However, threat actors increasingly use mobile platforms in sophisticated targeted attacks. Therefore, it makes sense to use a system that can monitor network activity — be it a security information and event management (SIEM) system or another tool that empowers your experts to handle complex cybersecurity incidents with extended detection and response, such as our Kaspersky Anti Targeted Attack Platform.

The abovementioned Operation Triangulation was discovered by our experts while monitoring a corporate Wi-Fi network using our own SIEM system, Kaspersky Unified Monitoring and Analysis Platform (KUMA). In addition, our Threat Intelligence solutions can provide security systems and experts with up-to-date information about new threats, as well as about attackers’ techniques, tactics and procedures.
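
By way of illustration, here’s a minimal sketch of the kind of check a network-monitoring tool performs: matching devices’ DNS queries against a list of known command-and-control domains. This is not Kaspersky’s actual KUMA rule, and the domain names below are placeholders rather than the real indicators published on Securelist.

```swift
import Foundation

// Placeholder indicator list; real Triangulation IoCs are published on Securelist.
let indicatorDomains: Set<String> = ["example-c2-domain.com", "another-indicator.net"]

// Each log line is assumed to look like "timestamp,deviceID,queriedDomain".
func suspiciousDevices(in logLines: [String]) -> Set<String> {
    var flagged = Set<String>()
    for line in logLines {
        let fields = line.split(separator: ",").map(String.init)
        guard fields.count == 3 else { continue }  // skip malformed lines
        if indicatorDomains.contains(fields[2]) {
            flagged.insert(fields[1])  // remember which device queried a bad domain
        }
    }
    return flagged
}

let sample = ["2023-01-15T10:21:07Z,iphone-042,example-c2-domain.com"]
print(suspiciousDevices(in: sample))  // ["iphone-042"]
```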

Triangulation: Trojan for iOS (June 1, 2023)

Hi all,
Today – some breaking cybersecurity news on an incident we’ve just uncovered…

Our experts have discovered an extremely complex, professional targeted cyberattack that uses Apple’s mobile devices. The purpose of the attack is to inconspicuously plant spyware on the iPhones of employees of, at the very least, our company – both middle and top management.

The attack is carried out using an invisible iMessage with a malicious attachment, which, using a number of vulnerabilities in the iOS operating system, is executed on a device and installs spyware. The deployment of the spyware is completely hidden and requires no action from the user. The spyware then quietly transmits private information to remote servers: microphone recordings, photos from instant messengers, geolocation, and data about a number of other activities of the owner of the infected device.

Despite the attack being carried out as discreetly as possible, the infection was detected by the Kaspersky Unified Monitoring and Analysis Platform (KUMA) – our native SIEM solution for security information and event management. At the beginning of the year, the system detected an anomaly in our network coming from Apple devices. Further investigation by our team showed that several dozen iPhones of senior employees were infected with new, extremely technologically sophisticated spyware we’ve dubbed “Triangulation”.

Due to the closed nature of iOS, there are no (and cannot be any) standard operating-system tools for detecting and removing this spyware on infected smartphones. To do this, external tools are needed.

An indirect indication of the presence of Triangulation on the device is the disabling of the ability to update iOS. For more precise and reliable recognition of an actual infection, a backup copy of the device needs to be made and then checked with a special utility. More detailed recommendations are set out in this technical article on Securelist. We’re also developing a free detection utility and will make it available once tested.

June 2, 2023 Update: triangle_check utility
We’ve developed and made freely available the triangle_check utility, which can detect indicators of compromise in an Apple device’s backup. Detailed instructions on how to use it under different OSs (Windows, Linux and macOS), as well as how to create a device backup can be found in the post on Securelist.
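
For those who want to script the check, here’s a minimal macOS sketch. It assumes the utility was installed with “python3 -m pip install triangle_check”, as described on Securelist, and the backup path is a placeholder you’d replace with your own backup directory.

```swift
import Foundation

// Hypothetical wrapper: run triangle_check against a local device backup
// and print its verdict. The backup path below is a placeholder.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
process.arguments = ["python3", "-m", "triangle_check", "/path/to/backup"]

let pipe = Pipe()
process.standardOutput = pipe

do {
    try process.run()
    process.waitUntilExit()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    print(String(data: data, encoding: .utf8) ?? "")
} catch {
    print("Failed to launch triangle_check: \(error)")
}
```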

Due to certain peculiarities inherent in the blocking of iOS updates on infected devices, we’ve not yet found an effective way to remove the spyware without losing user data. It can only be done by resetting infected iPhones to the factory settings and installing the latest version of the operating system and the entire user environment from scratch. Otherwise, even if the spyware is deleted from the device memory following a reboot, Triangulation is still able to re-infect through vulnerabilities in an outdated version of iOS.

Our report on Triangulation represents just the beginning of the investigation into this sophisticated attack. Today we’re publishing the first results of the analysis, but there’s still a lot of work to do. As the incident continues to be investigated, we’ll be adding new data to a dedicated post on Securelist, and will share our full, finalized findings at the international Security Analyst Summit in October (follow the news on the site).

We’re confident that Kaspersky was not the main target of this cyberattack. The coming days will bring more clarity and further details on the worldwide proliferation of this spyware.

We believe that the main reason for this incident is the proprietary nature of iOS. This operating system is a “black box” in which spyware like Triangulation can hide for years. Detecting and analyzing such threats is made all the more difficult by Apple’s monopoly on research tools, which makes iOS a perfect haven for spyware. In other words, as I’ve often said, users are given the illusion of security based on the complete opacity of the system. What actually happens in iOS is unknown to cybersecurity experts, and the absence of news about attacks in no way indicates that they’re impossible – as we’ve just seen.

I’d like to remind you that this is not the first case of a targeted attack against our company. We’re well aware that we work in a very aggressive environment, and we’ve developed the appropriate incident response procedures. Thanks to the measures taken, the company is operating normally, business processes and user data are unaffected, and the threat has been neutralized. We continue to protect you, as always.

P.S. Why “Triangulation”?

To recognize the software and hardware specifications of the attacked system, Triangulation uses Canvas Fingerprinting technology and draws a yellow triangle in the device’s memory.

Apple Emergency SOS and satellite communications for smartphones (February 13, 2023)

Phone improvements have long followed a well-trodden path: brighter screen, more memory, better camera, longer battery life. As a result, there’s less and less to get excited about when it comes to product announcements. But in 2022, Apple, Huawei, and Motorola really did unveil something new and unexpected: texting via satellite. It’s not yet about Instagramming from the top of Everest or in the middle of the Pacific, but you can now at least call for help or report your location with neither Wi-Fi nor 4G.

How it works

Satellite phones have been around for three decades, but they’re still expensive, inconvenient, and fairly bulky. An innovation of recent years is satellite connectivity on ordinary phones — but this required new satellites. Previously, satellite phones worked using a small number of high-Earth-orbit satellites. Over the past 5–7 years, however, the key players — Iridium and Globalstar — have launched quite a few low-Earth-orbit (LEO) satellites, operating at an altitude of just 500–800 kilometers. The most hyped project of this kind is undoubtedly Elon Musk’s Starlink. While it uses similar technology, Starlink is aimed at relatively high-speed internet and requires the subscriber to purchase a special terminal. That said, in late December 2022 the first Starlink Gen2 satellite was launched, which will also provide connectivity for regular (non-satellite) smartphones.

Iridium’s LEO constellation and geostationary satellites of another operator. Illustration from iridium.com

The satellites communicate with a phone in the relatively low-frequency L band (1.5–2 GHz). GPS and GLONASS satellites, which orbit at around 20,000 kilometers above Earth, operate in the same frequency range. The advantages of this range are low levels of both signal decay over long distances and weather interference. Thanks to this, the satellite can “hear” the phone’s weak transmitter. The main disadvantage is a low data-transfer rate. That’s why all satellite-based services we’re discussing today basically rely on the SMS format: 140 characters per message, and not a selfie in sight.

To support satellite communication, three things are required of the phone: modem support for the satellite network radio protocol, a modified antenna, and special software. The trickiest is the first of these, because such a modem needs not only to be produced in the first place, but also coordinated with the satellite operator. Not surprisingly, the leader of the pack is Qualcomm, which not only dominates the mobile chipset market, but also has nearly 30 years of experience in satellite systems (having jointly founded the Globalstar network in 1994). Therefore, the first large-scale launch of satellite telephone communication was made possible by Qualcomm’s know-how and Apple’s financial muscle. The latter paid for the feature to be implemented in the new iPhone chips and, more significantly, invested a solid US$450 million in the development of the Globalstar network, its satellites and ground stations.

Apple was the first to enter the market, but it surely won’t remain the monopolist. Qualcomm has already implemented the feature in its Snapdragon X70 modem chip, which is part of the flagship Snapdragon 8 Gen 2 Mobile Platform. The Snapdragon Satellite service was announced in partnership with the Iridium network, so in H2 2023 we can expect (expensive) smartphones capable of sending and receiving text messages via satellite.

Other players are scrambling aboard too: Huawei plans to provide a similar service in its smartphones using China’s BeiDou Navigation Satellite System (although there are no details on the timing or coverage); Motorola is partnering up with the Skylo (Inmarsat) satellite provider; and the above-mentioned Starlink has entered into an agreement with U.S. operator T-Mobile to co-deploy such a service on T-Mobile’s licensed 1.9 GHz bands.

For future 5G devices, the ability to communicate with satellite base stations instead of ground ones is already standardized. But actual devices with such functionality are set to appear no earlier than 2024.

Quality and coverage

The technology imposes its own limitations, which will be the same whoever makes the phone.

First, it’s certainly slower and less reliable than cellular communication. Thus, the phone will offer the satellite option only if there’s no other connection available, and with major restrictions so as not to overload the network: one 140-character text and no multimedia — in emergencies, for example. Apple demonstrates this very clearly: first, the phone determines the precise location and asks for a few details about the situation, then it integrates the collected information and sends it as one packet.

Collecting information and sending an emergency message by iPhone. Illustration from apple.com

Second, the satellite link only works in open spaces. There’s no connection in thick forest, dense urban areas, or rocky gorges.

Third, sending a text isn’t as simple as we’re used to. You need to hold the phone in front of you, turn in the right direction, follow the on-screen instructions, and then wait 10–60 seconds while the hundreds of bytes are sent and received.

Instructions for connecting to a satellite with an iPhone. Illustration from apple.com
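
To put those 10–60 seconds in perspective, here’s a quick back-of-envelope calculation; the 30-second transfer time is an illustrative mid-range figure, not a measured value.

```swift
// Effective throughput of one 140-byte satellite text that takes ~30 seconds
// end to end (an illustrative mid-range figure).
let messageBytes = 140.0
let transferSeconds = 30.0
let bitsPerSecond = messageBytes * 8 / transferSeconds
print("≈\(Int(bitsPerSecond)) bit/s")  // ≈37 bit/s, orders of magnitude below even dial-up
```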

Fourth, depending on the satellite provider, the service may not be available in certain regions. This is perhaps the biggest drawback at present — the lack of a developed market for either satellite communications or roaming. As such, both Globalstar and Apple offer Emergency SOS only in the U.S., southern Canada, and some countries in Western Europe. Satellites do not generally serve high latitudes (above the 62nd parallel), which leaves Alaska and northern Canada, for example, out of reach. The situation with Iridium is better: its satellites cover both the equator and the poles. The only thing missing is compatible Android terminals from Qualcomm’s partners. Some satellite constellations have gaps in their coverage, so certain places are not served 24/7. This isn’t relevant for the Apple and Qualcomm services, but some competitors may show a “Please try again in half-an-hour” message at the crucial moment.

Prices

No one has a clear idea yet of how much the service should cost. Clearly, it won’t be mass-market, because most people live inside regular cellular network coverage. What the surcharge for emergency communications will be, and in what form it will be charged, the market will determine in the coming years. Apple offers it as a free service, but only for two years after the purchase of a new iPhone; the company hasn’t announced what the subscription fee will be after that. But usage will be modest, because Apple is positioning the feature solely as an emergency communication channel. All we know for now (at the time of posting) is that Motorola plans to charge US$5 for 30 messages. But these can be any messages — not just emergency ones.

Security

Texting is widely known to be an insecure communication channel. So what about the privacy of satellite texts? Apple says its messages are packaged and encrypted, making them nearly impossible to fake or intercept on the way from a phone to a satellite. However, since they pertain to an emergency, the company immediately forwards the information to the response center closest to the subscriber (rescuers, firefighters, etc.), where it is no longer encrypted and is processed according to that center’s procedures. The same is true for the Snapdragon Satellite service, which relies on Garmin inReach infrastructure: the data transfer itself is encrypted, but the operators then handle the decrypted text. When texting friends rather than the emergency services, don’t count on end-to-end encryption — all specifications mention only in-transit encryption. The good news is that this rules out spoofing of the sender address or tampering with the message text.

For a glimpse into what to expect from phones in the foreseeable future, take a look at the ads for the inReach service, for which specialized devices have long been available. Among the potentially unsafe features in terms of privacy is the periodic sending of the subscriber’s location to the satellite — to enable their friends to track their ascent up a mountain, for example. To date, no service based on conventional smartphones is touting this option — only on-demand sending of location. And given that you have to pull out your smartphone and spin around in search of a satellite, there’s no need to worry about stealthy location sending, at least for now. Still, it’s worth keeping an eye on how this technology develops.

How Apple’s Lockdown Mode works (August 1, 2022)

In July 2022, Apple announced a new protection feature for its devices. Called “Lockdown Mode,” it severely restricts the functionality of your Apple smartphone, tablet or laptop. Its purpose is to reduce the success rate of targeted attacks, which politicians, activists and journalists, among others, are subjected to. Lockdown Mode is set to appear in the upcoming releases of iOS 16 (for smartphones), iPadOS 16 (for tablets) and macOS 13 Ventura (for desktops and laptops).

For ordinary users, this operating mode is likely to cause more inconvenience than do good. For this reason, Apple recommends it only for users whose activities mean they’re likely to face targeted attacks. In this post, we analyze the ins and outs of Lockdown Mode, compare the new restrictions with the capabilities of well-known exploits for Apple smartphones, and examine why this mode, although useful, is no silver bullet.

Lockdown Mode in detail

Before the end of this year, with the release of the new versions of iOS, your Apple smartphone or tablet (if relatively recent, that is, no earlier than 2018) will have the new Lockdown Mode in its settings.

Lockdown Mode activation screen on an Apple smartphone

After activation, the phone will reboot, and some small (but, for some people, vital) features will stop working. For example, iMessage attachments will be blocked and websites may stop working properly in the browser. It will also be harder for people you’ve had no contact with before to reach you. All of these restrictions are an effort to close the entry points most commonly exploited by attackers.

Digging deeper, Lockdown Mode introduces the following restrictions on your Apple device:

  1. In iMessage chats, you can see only text and images sent to you. All other attachments will be blocked.
  2. Some technologies will be disabled in the browser, including just-in-time (JIT) compilation.
  3. All incoming invitations to communicate through Apple services will be blocked. For example, you will be unable to make a FaceTime call if you have not previously chatted with the other user.
  4. If locked, your smartphone will not interact in any way with your computer (or other external devices connected with a cable).
  5. It won’t be possible to install configuration profiles or enroll the phone into Mobile Device Management (MDM).

The first three measures aim to limit the most common remote targeted attack vectors on Apple devices: an infected iMessage, a link to a malicious website and an incoming video call.

The fourth is designed to prevent your iPhone, if left unattended, from being connected to a computer and having valuable information stolen through a vulnerability in the communication protocol.

And the fifth restriction makes it impossible to connect a smartphone in Lockdown Mode to an MDM system. Companies often use MDM for security purposes, such as wiping data from a lost phone. But this feature can also be used to steal data, since it gives the MDM administrator wide-ranging control over the device.

All in all, Lockdown Mode sounds like a good idea. Maybe we should all put up with some inconvenience to stay safe?

Features versus bugs

Before addressing this question, let’s assess how radical Apple’s solution actually is. If you think about it, it’s the exact opposite of all established norms in the industry. Usually, it goes like this: first, a developer comes up with a new feature, rolls it out and then wrestles to rid the code of bugs. With Lockdown Mode, on the other hand, Apple proposes giving up a handful of existing features for the sake of better protection.

A simple (and purely theoretical) example: suppose the maker of a messenger app adds the ability to exchange beautiful animated emojis, and even create your own. Then it turns out that it’s possible to create an emoji that causes the devices of all recipients to constantly reboot. Not nice.

To avoid this, the feature should have been scrapped, or had more time spent on vulnerability analysis. But it was more important to release and monetize the product as quickly as possible. In this behind-the-scenes struggle between security and convenience, the latter always won. Until now — for Apple’s new mode places security ahead of everything else. There’s only one word to describe it: cool.

Does it mean that iPhones without Lockdown Mode are unsafe?

Apple mobile devices are already pretty secure, which is important in the context of this announcement. Stealing data from an iPhone isn’t easy, and Apple is bending over backwards to keep it that way.

For example, your biometric information for unlocking your phone is stored only on the device and is not sent to the server. Data in the phone’s storage is encrypted. Your PIN to unlock the phone cannot be brute-forced: after several wrong attempts, the device is locked. Smartphone apps run in isolation from each other and cannot, generally speaking, access data stored by other apps. Hacking an iPhone is getting harder every year. For most users, this level of security is more than sufficient.

So why add yet more protection?

The question concerns a fairly small number of users whose data is so valuable that those who want it are prepared to go to extraordinary lengths to get it. Extraordinary lengths in this context means spending a lot of time and money on developing complex exploits able to bypass known protection systems. Such sophisticated cyberattacks threaten only a few tens of thousands of people in the whole world.

This ballpark figure is known to us from the Pegasus Project. In 2020, a list was leaked of some 50,000 names and phone numbers of individuals who allegedly had been (or could have been) attacked using spyware developed by NSO Group. This Israeli company has long been criticized for its “legal” development of hacking tools for clients, which include many intelligence agencies worldwide.

NSO Group itself denied any link between its solutions and the leaked list of targets, but evidence later emerged that activists, journalists and politicians (all the way up to heads of state and government) had indeed been attacked using the company’s technologies. Developing exploits, even legally, is a dodgy business that can result in the leakage of extremely dangerous attack methods, which anyone can then use.

How sophisticated are exploits for iOS?

The complexity of these exploits can be gauged by looking at a zero-click attack that Google’s Project Zero team investigated at the end of last year. Normally, the victim at least has to click a link to activate the attacker’s malware, but “zero-click” means that no user action is required to compromise the targeted device.

In the particular case described by Project Zero, it’s sufficient to send a malicious message to the victim in iMessage, which on most iPhones is enabled by default and replaces regular texts. In other words, it’s enough for an attacker to know the victim’s phone number and send a message, whereupon they gain remote control over the targeted device.

The exploit is very complicated. In iMessage, the victim receives a file with a GIF extension that is actually not a GIF at all, but rather a PDF compressed using an algorithm (JBIG2) that was fairly popular back in the early 2000s. The victim’s phone attempts to show a preview of this document. In most cases, Apple’s own code is used for this, but for this particular compression a third-party program is employed. And in it, a vulnerability was found — a not particularly remarkable buffer-overflow error. To put it as simply as possible, built around this minor vulnerability is a separate and independent computational system, which ultimately executes malicious code.

In other words, the attack exploits a number of non-obvious flaws in the system, each of which seems insignificant in isolation. However, if they are strung together in a chain, the net result is iPhone infection by means of a single message, with no user clicks required.

This, quite frankly, is not something a teenage hacker might accidentally stumble across. And not even what a team of regular malware writers might create: they are usually after a much more direct route to monetization. Such a sophisticated exploit must have required many thousands of hours and many millions of dollars to create.

But let’s remember a key feature of Lockdown Mode mentioned above: almost all attachments are blocked. This is precisely to make zero-click attacks far harder to pull off, even if the iOS code does contain the corresponding bug.

The remaining features of Lockdown Mode are there to close other common “entry points” for targeted attacks: web browser, wired connection to a computer, incoming FaceTime calls. For these attack vectors, there already exist quite a few exploits, though not necessarily in Apple products.

What are the chances of such an elaborate attack being deployed against you personally if you’re not on the radar of intelligence services? Pretty much zero, unless you get hit by accident. Therefore, for the average user, using Lockdown Mode doesn’t make much sense. There’s little point in making your phone or laptop less usable in exchange for a slight decrease in the chances of being on the receiving end of a successful attack.

Not by lockdown alone

On the other hand, for those who are in the circle of potential targets of Pegasus and similar spyware, Apple’s new Lockdown Mode is certainly a positive development, but not a silver bullet.

In addition to (and, until its release, instead of) Lockdown Mode, our experts have a few other recommendations. Keep in mind, this is about a situation in which someone very powerful and very determined is hunting for your data. Here are a few tips:

  • Reboot your smartphone daily. Creating an iPhone exploit is already hard, making it resistant to a reboot is much harder. Turning off your phone regularly will provide a little more protection.
  • Disable iMessage altogether. Apple is unlikely to recommend this, but you can do it yourself. Why just reduce the chances of an iMessage attack when you can eliminate the whole threat in one fell swoop?
  • Do not open links. In this case, it doesn’t even matter who sent them. If you really need to open a link, use a separate computer and preferably the Tor browser, which hides your data.
  • If possible, use a VPN to mask your traffic. Again, this will make it harder to determine your location and harvest data about your device for a future attack.

For more tips, see Costin Raiu’s post “Staying safe from Pegasus, Chrysaor and other APT mobile malware.”

Can a powered-off iPhone be hacked? (June 7, 2022)

Researchers from the Secure Mobile Networking Lab at TU Darmstadt, Germany, have published a paper describing a theoretical method for hacking an iPhone — even if the device is off. The study examined the operation of the wireless modules, found ways to analyze the Bluetooth firmware and, consequently, to introduce malware capable of running completely independently of iOS, the device’s operating system.

With a little imagination, it’s not hard to conceive of a scenario in which an attacker holds an infected phone close to the victim’s device and transfers malware, which then steals payment card information or even a virtual car key.

The reason it requires any imagination at all is that the authors of the paper didn’t actually demonstrate this, stopping one step short of a practical attack implementation in which something really nasty is loaded onto the smartphone. All the same, even without this, the researchers did a lot to analyze the undocumented functionality of the phone, reverse-engineer its Bluetooth firmware, and model various scenarios for using the wireless modules.

So, if the attack didn’t play out, what’s this post about? We’ll explain, don’t worry, but first an important statement: if a device is powered off, but interaction with it (hacking, for example) is somehow still possible, then guess what — it’s not completely off!

How did we get to the point where switching something off doesn’t necessarily mean it’s actually off? Let’s start from the beginning…

Apple’s Low Power Mode

In 2021, Apple announced that the Find My service, which is used for locating a lost device, would work even when the device is switched off. This improvement is available in all Apple smartphones since the iPhone 11.

If, for example, you lose your phone somewhere and its battery runs out after a while, it doesn’t turn off completely, but switches to Low Power Mode, in which only a very limited set of modules is kept alive. These are primarily the Bluetooth and Ultra Wideband (UWB) wireless modules, as well as NFC. There’s also the so-called Secure Element — a secure chip that stores your most precious secrets, such as credit card details for contactless payments or car keys — the latter a feature available since 2020 for a limited number of vehicles.

Bluetooth in Low Power Mode is used for data transfer, while UWB is used for determining the smartphone’s location. In Low Power Mode, the smartphone sends out information about itself, which the iPhones of passers-by can pick up. If the owner of a lost phone logs in to their Apple account online and marks the phone as lost, information from surrounding smartphones is then used to determine the whereabouts of the device. For details of how this works, see our recent post about AirTag stalking.

The announcement quickly prompted a heated discussion among information security experts about the maze of potential security risks. The research team from Germany decided to test out possible attack scenarios in practice.

When powering off the phone, the user now sees the “iPhone Remains Findable After Power Off” message.

Find My after power off

First of all, the researchers carried out a detailed analysis of the Find My service in Low Power Mode, and discovered some previously unknown traits. After power off, most of the work is handled by the Bluetooth module, which is reloaded and configured by a set of iOS commands. It then periodically sends data packets over the air, allowing other devices to detect the not-really-off iPhone.

It turned out that the duration of this mode is limited: in iOS 15.3, only 96 broadcast sessions are scheduled, at 15-minute intervals. That is, a lost and powered-off iPhone will be findable for just 24 hours. If the phone powered off due to a low battery, the window is even shorter — about five hours. This can be considered a quirk of the feature, but a real bug was also found: sometimes when the phone is off, the “beacon” mode is not activated at all, although it should be.

Of most interest here is that the Bluetooth module is reprogrammed before power off; that is, its functionality is fundamentally altered. But what if it can be reprogrammed to the detriment of the owner?

Attack on a powered-off phone

In fact, the team’s main discovery was that the firmware of the Bluetooth module is not encrypted and not protected by Secure Boot technology. Secure Boot involves multistage verification of the program code at start-up, so that only firmware authorized by the device manufacturer can be run.

The lack of encryption permits analysis of the firmware and a search for vulnerabilities, which can later be used in attacks. But the absence of Secure Boot allows an attacker to go further and completely replace the manufacturer’s code with their own, which the Bluetooth module then executes. For comparison, analysis of the iPhone’s UWB module firmware revealed that it’s protected by Secure Boot, although the firmware isn’t encrypted either.

Of course, that’s not enough for a serious, practical attack. For that, an attacker needs to analyze the firmware, try to replace it with something of their own making, and look for ways to break in. The authors of the paper describe in detail the theoretical model of the attack, but don’t show practically that the iPhone is hackable through Bluetooth, NFC or UWB. What’s clear from their findings is that if these modules are always on, the vulnerabilities likewise will always work.

Apple was unimpressed by the study, and declined to respond. This in itself, however, says little: the company is careful to keep a poker face even in cases when a threat is serious and demonstrated to be so in practice.

Bear in mind that Apple goes to great lengths to keep its secrets under wraps: researchers have to deal with closed software code, often encrypted, on Apple’s own hardware, with made-to-order third-party modules. A smartphone is a large, complex system that’s hard to figure out, especially if the manufacturer hinders rather than helps.

No one would describe the team’s findings as breathtaking, but they’re the result of a lot of painstaking work. The paper has merit for questioning the security policy of a power-off mode that keeps some modules alive — and the doubts were shown to be justified.

A half powered-off device

The paper concludes that the Bluetooth firmware is not sufficiently protected. It’s theoretically possible either to modify it in iOS or to reprogram the same Low Power Mode by expanding or changing its functionality. The UWB firmware can also be examined for vulnerabilities. The main problem, however, is that these wireless modules (as well as NFC) communicate directly with the protected enclave that is Secure Element. Which brings us to some of the paper’s most exciting conclusions:

Theoretically, it’s possible to steal a virtual car key from an iPhone — even if the device is powered off! Clearly, if the iPhone is the car key, losing the device could mean losing the car. However, in this case the actual phone remains in your possession while the key is stolen. Imagine it like this: an intruder approaches you at the mall, brushes their phone against your bag, and steals your virtual key.

It is theoretically possible to modify the data sent by the Bluetooth module, for example, in order to use a smartphone to spy on a victim — again, even if the phone is powered off.

Having payment card information stolen from your phone is another theoretical possibility.

But all this of course still remains to be proven. The work of the team from Germany shows once more that adding new functionality carries certain security risks that must be taken into account. Especially when the reality is so different from the perception: you think your phone is fully off, when in fact it isn’t.

This is not a completely new problem, mind. The Intel Management Engine and AMD Secure Technology, which also handle system protection and secure remote management, are active whenever the motherboard of a laptop or desktop computer is connected to a power source. As in the case of the Bluetooth/UWB/NFC/Secure Element bundle in iPhones, these systems have extensive rights inside the computer, and vulnerabilities in them can be very dangerous.

On the bright side, the paper has no immediate impact on ordinary users: the data obtained in the study is insufficient for a practical attack. As a surefire solution, the authors suggest that Apple should implement a hardware switch that kills the power to the phone completely. But given Apple’s physical-button phobia, you can be sure that won’t happen.

Why you need to always update Safari on iPhone (April 4, 2022)

Lots of iPhone users aren’t crazy about the iOS built-in browser, Safari, and prefer to use an alternative — Google Chrome, Mozilla Firefox, or even something more exotic like DuckDuckGo, Brave or Microsoft Edge (yes, there’s Edge for iOS!).

iPhone users who prefer alternative browsers might get lulled into thinking that the vulnerabilities in Safari and the WebKit engine don’t present a direct danger to them. Unfortunately, this isn’t the case. In this post, we give you the lowdown and tell you why you need to make sure that Safari and WebKit on your iPhone are always updated in time.

Every browser in iOS is Safari

Every browser is based on what is called an “engine.” The engine processes the code that is received from the internet and transforms it into the web pages that the browser ultimately shows the user. Of course, the browser has a bunch of other necessary and useful parts that direct the engine and ensure that the additional features work. Think of the browser engine like the engine of a car: it’s the most important part of a browser and without it you won’t get anywhere.

There are three major browser engines in the world. Google’s Chrome and Chromium browsers use the Blink engine (with the V8 JavaScript engine), and Microsoft Edge and dozens of other browsers are based on Chromium. There is also the Gecko engine — its modern version is called Quantum — which Mozilla developed and supports for the Firefox browser and a few others. Finally, the third giant of the modern web is Apple’s engine — WebKit, which is used in the Safari browser.

But here’s the thing. The Chrome and Firefox versions for desktop computers and Android are built on Google’s Blink engine and Mozilla’s Gecko/Quantum engine, respectively. However, it’s a different story on iPhones. In keeping with Apple’s policies, only one engine is permitted in iOS — you guessed it: WebKit. This means that all browsers for iOS are essentially Safari with different user interfaces.

Excerpt from the iOS app developer rules: “Apps that browse the web must use the appropriate WebKit framework and WebKit JavaScript.”

This means that all vulnerabilities found in WebKit present a danger for users of any browsers for iOS. Since iPhones are a very tempting target for hackers, security specialists study the WebKit engine all the more closely, and as a result, they find vulnerabilities in it rather often. This includes vulnerabilities that attackers are already using in the wild.
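
To make this concrete, below is a minimal sketch (hypothetical app code, not from any real browser) of what any iOS browser ultimately is: a user interface wrapped around the system-provided WKWebView.

```swift
import UIKit
import WebKit

// Any iOS "browser" is a shell around WKWebView, the system WebKit view.
// The UI may differ between apps; the engine cannot.
final class BrowserViewController: UIViewController {
    private let webView = WKWebView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        webView.frame = view.bounds
        view.addSubview(webView)
        // All page parsing, JavaScript execution, and rendering happen inside
        // WebKit, so a WebKit vulnerability affects this app regardless of branding.
        webView.load(URLRequest(url: URL(string: "https://example.com")!))
    }
}
```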

One of the most dangerous types of vulnerabilities in a browser engine is a so-called zero-click vulnerability, which allows bad actors to infect an iPhone without any action on the user’s part. When this kind of vulnerability is exploited, the user doesn’t need to be convinced to download or install anything. All the attacker needs to do is draw the victim to a specially built website with malicious code, or hack a popular site and implant the malicious code in it. After the user visits such a site through a vulnerable browser, the attackers can take control of the iPhone.

How to update Safari and WebKit

It’s important to remember that the update of the WebKit engine and Safari browser isn’t related to the update of the browser apps you’re using. Google Chrome automatically updates from the App Store — that is, if you haven’t disabled this option, and we don’t recommend that you do — but in essence this is an update of the shell program, not the engine. So this won’t solve the problem of vulnerabilities in WebKit.

To avoid vulnerabilities in both the WebKit engine and Safari browser, you need to install the appropriate iOS updates. The best thing to do is to make sure to install all the latest operating system updates — after all, the vulnerabilities aren’t just in the browser engine but also in other important components of iOS.

To update your iPhone, go to Settings → General → Software Update. If you see a button on the screen that says Download and Install, tap it and follow the instructions.

Where to find the iOS update in your iPhone’s settings

Don’t be afraid of iOS updates

A lot of users are lukewarm about updating the operating system: some people don’t like having to get used to new features in the interface, some worry about having less storage, while others fear that after an update the iPhone may start to slow down or some old apps that are no longer supported in the new version will stop working.

These fears aren’t totally unfounded. It’s true that Apple does sometimes make the interface less user-friendly. It’s also true that each new version of the system takes up a bit more storage than the previous one and leaves less space for your files. And it’s no myth that iPhones have slowed down after an update — this has been documented.

But we still recommend that you always keep your iPhone updated: doing so is crucial for keeping your data safe and ensuring that it doesn’t fall into the wrong hands. Unfortunately, there’s no full-fledged antivirus for iOS, which means the iPhone’s security rests solely on Apple’s protection mechanisms — so without a system update, any hole in them remains an open door for hackers.

Update iOS! There is a dangerous vulnerability in WebKit (CVE-2022-22620) (February 11, 2022)

Apple has released an urgent update for iOS and iPadOS that fixes the CVE-2022-22620 vulnerability. The company recommends updating devices as soon as possible, as it has reason to believe the vulnerability is already being actively exploited by unknown actors.

Why vulnerability CVE-2022-22620 is dangerous

As usual, Apple’s experts are not disclosing the details of the vulnerability until the investigation is complete and the majority of users have installed the patches. At the moment, they say only that the vulnerability belongs to the Use-After-Free (UAF) class, meaning it’s related to incorrect use of dynamic memory in applications. Its exploitation allows an attacker to craft malicious web content whose processing can lead to arbitrary code execution on the victim’s device.

Simply put, the most likely attack scenario is an infection of an iPhone or iPad device after visiting a malicious web page.
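
While Apple hasn’t disclosed the vulnerable code, the use-after-free class itself is easy to illustrate. Here’s a deliberately simplified sketch of the pattern (not the actual WebKit bug): memory is freed, but a stale pointer to it survives.

```swift
// Simplified illustration of the use-after-free (UAF) bug class.
let pointer = UnsafeMutablePointer<Int>.allocate(capacity: 1)
pointer.pointee = 42          // legitimate use of the allocation
pointer.deallocate()          // memory is returned to the allocator...

// ...but the stale pointer still exists. If buggy code read or wrote
// pointer.pointee here, it would touch memory that may already have been
// reused for something else, which is exactly what exploits abuse to
// achieve arbitrary code execution.
```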

Which devices and apps are vulnerable to CVE-2022-22620 exploitation

Judging by the description of the bug, the vulnerability was found in the WebKit engine, which is used in many applications for macOS, iOS and Linux. In particular, all browsers for iOS and iPadOS are based on this open-source engine — that is, not only the iPhone’s default Safari, but also Google Chrome, Mozilla Firefox and any others. So even if you don’t use Safari, this vulnerability still affects you directly.

Apple released updates for the iPhone 6s and newer; all models of iPad Pro; the iPad Air 2 and newer; the iPad starting with the 5th generation; the iPad mini starting with the 4th generation; and the iPod touch starting with the 7th generation.

How to stay safe

The patches that Apple released on February 10 change the memory-management mechanisms and thus prevent exploitation of CVE-2022-22620. So, to protect your device, it should be enough to install the iOS 15.3.1 or iPadOS 15.3.1 update. Your device needs to be connected to a Wi-Fi network to install the patch.

If your device doesn’t yet show a notification that the update is ready to install, you can speed things up a little: go to the system settings (Settings → General → Software Update) and check for updates manually.

To get alerts about the latest cyberthreats directly related to your devices and apps, we recommend using Kaspersky Security Cloud, available for Windows, macOS, Android and iOS. When a new vulnerability is discovered in software you use, or a data leak occurs on a website you visit, you’ll get a notification with advice on how to protect yourself.

What is NoReboot and how to protect yourself from such an attack (January 10, 2022)

To be absolutely sure your phone isn’t tracking you or listening in on any conversations, you might turn it off. It seems logical; that way, even if the phone is infected with serious spyware, it can’t do anything.

In addition, turning off or restarting a smartphone is one of the most reliable ways to fight such infections; in many cases, spyware “lives” only until the next reboot because it cannot gain a permanent foothold in the operating system. At the same time, the vulnerabilities that allow malware to work even after a reboot are rare and expensive to exploit.

However, this tactic might not work forever. Researchers have come up with a technique to bypass it using a method they have named NoReboot. In essence, this attack is a fake restart.

What is NoReboot, and how does the attack work?

We want to note right off the bat that NoReboot is not a feature of any real spyware in use by attackers; rather, it’s a so-called proof of concept that researchers demonstrated under laboratory conditions. At this point it is hard to say whether the method will actually gain traction.

For the demonstration, the researchers used an iPhone they “infected” beforehand. Unfortunately, they haven’t shared the technical details. Here’s what happens in the demonstration:

  • The spy malware, which transfers the image from the camera, runs on the iPhone;
  • The user tries to shut off the phone the usual way, using the power and volume buttons;
  • The malware takes control and shows a perfect fake instead of the standard iOS shutdown screen;
  • After the user drags the power-off slider, which also looks perfectly normal, the smartphone’s screen goes dark and the phone no longer responds to any of the user’s actions;
  • When the user presses the power button again, the malware displays a perfect replica of the iOS boot animation;
  • During the entire process, the phone continually transfers the image from its front camera to another device without the user’s knowledge.

As is often the case, seeing is believing, so we recommend checking out the researchers’ video demonstrating the attack.

How to protect yourself against NoReboot

Again, at least for now NoReboot is only a demonstration of the feasibility of an attack. The attack is alarming, to be sure, but don’t forget that malware needs to get onto a smartphone before it can do any damage. Here are some tips to help you prevent that from happening:

  • Keep in mind that it’s much harder for attackers to infect a smartphone remotely than if they have physical access to it. Be careful not to let someone else get hold of your smartphone — especially for a long period of time — and install a reliable device lock.
  • People most often install malware on their smartphones on their own, voluntarily. Be careful about what you download and avoid installing unnecessary apps — that is, those you can easily live without — as a general rule.
  • Don’t root or jailbreak your smartphone (at least if you haven’t been using *nix systems for many years). Superuser rights make malware’s work exponentially easier.
  • If you have an Android device, we recommend installing an antivirus solution — to block Trojans from penetrating the system.
  • Let your smartphone die a natural death from time to time — that is, wait for the charge to run out completely. The phone will then most certainly restart without any fakes, and there’s an excellent chance that spies will disappear from the system. You can speed up the process by using a resource-hungry app, such as a game or benchmark-test utility.

How to set app permissions in iOS 15 (December 2, 2021)

With each version of iOS, we’ve seen developers try to protect user data better. However, the core principle remains unchanged: you, the user, get to decide what information to share with which apps. With that in mind, we’ve put together an in-depth review of app permissions in iOS 15 to help you decide which requests to allow and which to deny.

Where to find iOS 15 app permission settings

iOS 15 offers several ways to manage permissions. We’ll talk about each of the three methods separately.

Managing permissions when you first launch an app

Every app requests permission to access certain information the first time you launch it, and that’s the easiest time to choose what data to share with the app. But even if you accidentally press “Yes” instead of “No,” you can still change it later.

Setting up all permissions for a specific app

To see and set all permissions for a particular app at once, open the system settings and scroll down to see a list of installed applications. Select an app to see what permissions it has and revoke them if you need to.

Setting specific permissions for different applications

Go to Settings → Privacy. In this section, you’ll find a long list of basic iOS 15 permissions. Tap each permission to see which applications have requested it. You can deny access to any of them at any time.

Not all permissions are in the Privacy menu; you’ll need to go to other settings sections to configure some of them. For example, you can disable mobile data transfer for apps in the Mobile section, and permission to use the Internet in the background is configured in the Background App Refresh section.

Now you know where to look for what. Next, we’ll go into more detail about all of iOS’s permissions.

Location Services

What it is: Permission to access your location. This permission isn’t just about GPS; apps can also navigate using mobile network base stations, Bluetooth, and the coordinates of Wi-Fi hotspots you are connected to. Access to location services is used, for example, by maps to plot routes and show you nearby businesses.

What the risks are: Having location access enables apps to map your movements accurately. App developers can use that data for marketing purposes, and cybercriminals can use it to spy on you.

You may not want to give this permission to an app if you don’t fully trust it or don’t think it needs that level of information. For example, social networks can do without location access if you don’t add geotags to your posts or if you prefer to do so manually.

If an app needs location access to work properly, here are two ways to protect yourself from being tracked:

  • Allow access to location only while using the app to give the app access to your coordinates only when you are actually using it. If the app wants to receive location information in the background, you will be notified and may opt out.
  • Turn off Precise Location to restrict the app’s knowledge of your location. In this case, the margin of error will be about 25 square kilometers (or 10 square miles) — that’s comparable to the area of a small city.

What’s more, iOS has long had an indicator that lets you know that an app is requesting access to your location. With iOS 15, that indicator has become much more prominent, appearing as a bright blue icon with a white arrow at the top of the screen.

When an app is accessing your location, iOS 15 shows a bright blue icon with a white arrow inside

Where to configure it: Settings → Privacy → Location Services
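
For context, here’s roughly what the request looks like on the app side — a minimal hypothetical sketch of asking for the least-privileged option, access only while the app is in use:

```swift
import CoreLocation

final class LocationPermissionExample: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func request() {
        manager.delegate = self
        // Triggers the system prompt the first time; afterwards the user's
        // choice can only be changed in Settings → Privacy → Location Services.
        manager.requestWhenInUseAuthorization()
    }

    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        switch manager.authorizationStatus {
        case .authorizedWhenInUse: print("Location available while the app is open")
        case .denied, .restricted: print("No location access")
        default: break
        }
    }
}
```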

Tracking

What it is: Permission to access a unique device identifier — the Identifier for Advertisers, or IDFA. Of course, each individual application can track a user’s actions in its own “territory.” But access to IDFA allows data matching across apps to form a much more detailed “digital portrait” of the user.

So, for example, if you allow tracking in all applications, then a social network can not only see all of your records and profile information in it, but also find out what games you play, what music you listen to, the weather in cities you are interested in, what movies you watch, and much more.

What the risks are: Tracking activity in apps enables the compilation of a much more extensive dossier on the phone’s owner, which increases advertising efficacy. In other words, it can encourage you to spend more money.

Starting in iOS 14.5, users gained the ability to disable tracking requests in apps.

Where to configure it: Settings → Privacy → Tracking
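
On the developer side, this permission is the App Tracking Transparency prompt. Here’s a minimal hypothetical sketch of the request an app must make before it can read the IDFA:

```swift
import AppTrackingTransparency

// Without the user's consent, the IDFA is returned as all zeros, so apps
// have nothing with which to match activity across their "territories."
ATTrackingManager.requestTrackingAuthorization { status in
    switch status {
    case .authorized:
        print("Tracking allowed, IDFA available")
    case .denied, .restricted, .notDetermined:
        print("Tracking not allowed")
    @unknown default:
        break
    }
}
```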

Contacts

What it is: Permission to access your address book — to read and change existing contacts and to add new contacts. Data an app can get with this permission includes not only names, phone numbers, and e-mail addresses, but also other information from your list of contacts, including notes about specific contacts (although apps need separate approval from Apple to access the notes).

What the risks are: Databases of contacts — with numbers, addresses, and other information — can, for example, be used to attack an organization, send spam, or conduct phone scams.

Where to configure it: Settings → Privacy → Contacts

Calendars

What it is: Permission to view, change, and add calendar events.

What the risks are: The app will receive all of your personal calendar information, including past and scheduled events. That may include doctor’s appointments, meeting topics, and other information you don’t want to share with outsiders.

Where to configure it: Settings → Privacy → Calendars

Reminders

What it is: Permission to read and change existing reminders and add new ones.

What the risks are: If you have something personal recorded in your Reminders app, such as health data or information about family members, you may not want to share it with any app developers.

Where to configure it: Settings → Privacy → Reminders

Photos

What it is: This permission allows the app to view, add, and delete photos and videos in your phone’s gallery. The app also can read photo metadata, such as information about where and when a photo was taken. Apps that need access to Photos include image editors and social networks.

What the risks are: A personal photo gallery can say a lot about a person: who their friends are, what they're interested in, where they go, and when. Even if your gallery contains no nude photos, pictures of both sides of your credit card, or screenshots of passwords, be cautious about giving apps access to it.

Starting with iOS 14, Apple added the ability to give an app access to individual photos instead of the entire gallery. For example, if you want to post something on Instagram, you can choose precisely which images to upload and keep the rest invisible to the social network. In our opinion, that's the best way to provide access to your images.
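
Developers can even sidestep the Photos permission entirely: since iOS 14, the system photo picker runs outside the app's process and hands over only the items you select. A minimal sketch (the function name is ours):

```swift
import PhotosUI

func makePhotoPicker() -> PHPickerViewController {
    var config = PHPickerConfiguration()
    config.filter = .images      // photos only, no videos
    config.selectionLimit = 1    // a single image for a single post
    // The picker runs in a separate process: the app never sees the gallery,
    // only the items the user explicitly hands over, with no permission prompt
    return PHPickerViewController(configuration: config)
}
```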

Where to configure it: Settings → Privacy → Photos

Local Network

What it is: Permission to connect to other devices on your local network, for example, to play music with AirPlay, or to control your router or other gadgets.

What the risks are: With this type of access, applications can collect information about all of the devices on your local network. Data about your equipment can help an attacker find vulnerabilities, hack your router, and more.
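
A typical trigger for this prompt is Bonjour discovery, the mechanism AirPlay uses to find speakers and TVs nearby. Here's a sketch of what sets off the Local Network request on iOS 14 and later, assuming the standard AirPlay service type:

```swift
import Network

// Browsing for Bonjour services on the LAN triggers the Local Network prompt
let browser = NWBrowser(for: .bonjour(type: "_airplay._tcp", domain: nil),
                        using: .tcp)
browser.browseResultsChangedHandler = { results, _ in
    // Every result is a device the app has just discovered on your network
    for result in results { print(result.endpoint) }
}
browser.start(queue: .main)
```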

Where to configure it: Settings → Privacy → Local Network

Nearby Interaction

What it is: Permission to use Ultra Wideband (UWB), which the iPhone 11 and later support. Using UWB lets you measure the exact distance between your iPhone and other devices that support the technology. In particular, it’s used in Apple AirTag to find things you’ve tagged.
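
For reference, here is roughly how an app taps into UWB through Apple's NearbyInteraction framework. Exchanging the peer's discovery token (over the network, say) is up to the app; the class name is our illustration.

```swift
import NearbyInteraction

final class RangingSession: NSObject, NISessionDelegate {
    private var session: NISession?

    func start(with peerToken: NIDiscoveryToken) {
        guard NISession.isSupported else { return }  // needs a U1 chip (iPhone 11 and later)
        session = NISession()
        session?.delegate = self
        session?.run(NINearbyPeerConfiguration(peerToken: peerToken))
    }

    func session(_ session: NISession, didUpdate nearbyObjects: [NINearbyObject]) {
        // Distance arrives in meters, precise enough to tell rooms apart
        if let meters = nearbyObjects.first?.distance {
            print("Peer is \(meters) m away")
        }
    }
}
```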

What the risks are: A malicious app with UWB access can determine your location extremely accurately, to an exact room in a house or even more precisely.

Where to configure it: Settings → Privacy → Nearby Interaction

Microphone

What it is: Permission to access your microphone.

What the risks are: With this permission, the app can record all conversations near the iPhone, such as in business meetings or at a medical appointment.

An orange dot in the upper right corner of the screen indicates when an app is using a microphone (the dot becomes red when you turn on the Increase Contrast accessibility feature).

When an app is using the microphone, iOS 15 shows an orange dot

Where to configure it: Settings → Privacy → Microphone

Speech Recognition

What it is: Permission to send voice-command recordings to Apple’s servers for recognition. An app needs this permission only if it uses Apple’s speech recognition service. If the app uses a third-party library for the same purpose, it will need another permission (Microphone) instead.
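
The distinction is visible in code: Apple's own service lives in the Speech framework and asks for its own authorization, separate from raw microphone access. A minimal sketch:

```swift
import Speech

SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized else { return }
    // Recordings may be sent to Apple's servers for recognition;
    // on-device recognition exists only for some languages
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    print("Recognizer ready:", recognizer?.isAvailable ?? false)
}
```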

What the risks are: By and large, asking for this permission suggests honest intentions on the developer's part: by using Apple's proprietary speech recognition service, they are following the company's rules and recommendations. A malicious app is far more likely to ask for direct access to the microphone. Nevertheless, be careful when granting this permission.

Where to configure it: Settings → Privacy → Speech Recognition

Camera

What it is: Permission to take photos and videos, and to obtain metadata such as location and time.

What the risks are: An application can connect to the phone’s cameras at any time, even without your knowledge, and obtain access to photos’ metadata (the time and location where they were taken). Attackers can use this permission to spy on you.

If an application is currently accessing the camera, a green dot lights up in the upper right corner of the screen.

When an app is using the camera, iOS 15 shows a green dot

Where to configure it: Settings → Privacy → Camera

Health

What it is: Permission to access data you keep in the Health app, such as height, weight, age, and disease symptoms.

What the risks are: App developers may sell your health information to advertisers or insurance companies, which can tailor ads based on that data or use it to calculate health insurance rates.

Where to configure it: Settings → Privacy → Health

Research Sensor & Usage Data

What it is: Access to data from the phone's built-in sensors, such as the light sensor, accelerometer, and gyroscope. Judging by indirect references in Apple's documentation, that could also include data from the microphone and facial recognition sensor, as well as from Apple Watch sensors. The permission can also provide access to data about keyboard usage, the number of messages sent, incoming and outgoing calls, categories of apps used, websites visited, and more.

As you can see, this permission can provide a range of sensitive data about the device’s owner. Therefore, only apps designed for health and lifestyle research should request it.

What the risks are: This permission can give outsiders information about you that ordinary apps can't get. In particular, such data can reveal your walking pattern and the position of your head while you're looking at the screen, along with a great deal about how you use your device.

Of course, you shouldn’t provide that much data about yourself to just anyone. Before agreeing to participate in a study and providing permission to the app in question, take a good look at what data the scientists are interested in, and how they plan to use it.

Where to configure it: Settings → Privacy → Research Sensor & Usage Data

HomeKit

What it is: The ability to control smart home devices.

What the risks are: With this level of access, an app can control smart home devices on your local network. For example, it can open smart door locks and blinds, turn music on and off, and control lights and security cameras. A random photo-filter app (for example) should not need this permission.
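
The scale of that access is easy to see in code: one object exposes every home and accessory. Here's a sketch using Apple's HomeKit framework; the class name is our illustration.

```swift
import HomeKit

final class HomeAuditor: NSObject, HMHomeManagerDelegate {
    // Creating an HMHomeManager is what triggers the HomeKit permission prompt
    private lazy var manager: HMHomeManager = {
        let m = HMHomeManager()
        m.delegate = self
        return m
    }()

    func start() { _ = manager }  // touch the lazy property to kick things off

    func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
        for home in manager.homes {
            // Once granted, the app can list (and control) every accessory
            for accessory in home.accessories {
                print(home.name, "->", accessory.name)
            }
        }
    }
}
```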

Where to configure it: Settings → Privacy → HomeKit

Media & Apple Music

What it is: Permission to access your media library in Apple Music and iCloud. Apps will receive information about your playlists and personal recommendations, and they will be able to play, add, and delete tracks from your music library.

What the risks are: If you don’t mind sharing your music preferences with the app, you probably have nothing to worry about, but be aware that this data may also be used for advertising purposes.

Where to configure it: Settings → Privacy → Media & Apple Music

Files and Folders

What it is: Permission to access documents stored in the Files app.

What the risks are: Apps can change, delete, or even steal important documents stored in the Files app. If you use Files to store important data, grant access only to apps you truly trust.

Where to configure it: Settings → Privacy → Files and Folders

Motion & Fitness

What it is: Permission to access data about your workouts and daily physical activity, such as number of steps taken, calories burned, and so on.

What the risks are: Just like medical data from the Health app, activity data may be used by marketers to display targeted ads and by insurance companies to calculate health insurance costs.

Where to configure it: Settings → Privacy → Motion & Fitness

Focus

What it is: This permission allows apps to see if notifications on your smartphone are currently muted or enabled.

What the risks are: None.

Where to configure it: Settings → Privacy → Focus

Analytics & Improvements

What it is: Permission to collect and send data to Apple about how you use your device. It includes, for example, information about the country you live in and the apps you run. Apple uses the information to improve the operating system.

What the risks are: Your smartphone may use mobile data to send Apple data, potentially draining both the battery and your data plan a bit faster.

Where to configure it: Settings → Privacy → Analytics & Improvements

Apple Advertising

What it is: Permission to collect personal information such as your name, address, age, gender, and more, and use it to show targeted ads from Apple’s ad service — but not to share it with other companies. Disabling this permission will not eliminate ads, but without data collection they will be generic, not targeted.

What the risks are: As with any targeted ads, more effective advertising may lead to extra spending.

Where to configure it: Settings → Privacy → Apple Advertising

Record App Activity

What it is: Permission to keep track of what data (location, microphone, camera, etc.) any given application accessed. At the time of this writing (using iOS 15.1), users may download the collected data as a file, albeit not a very informative one. Future versions of iOS (starting with 15.2, expected at the end of 2021) will use this data for the App Privacy Report, which is a bit like Screen Time, but for app tracking.

What’s useful: If you want to use the App Privacy Report as soon as iOS 15.2 becomes available, you may want to enable app activity logging in advance.

Where to configure it: Settings → Privacy → Record App Activity

Mobile Data

What it is: Permission to use mobile Internet. Applications need access to the Web to send messages, load photos and news feeds, and complete technical tasks such as sending bug reports.

What the risks are: Apps working in the background can quickly deplete data allowances. Users may prefer to deny mobile Internet access to apps that send a lot of data over the Web, instead limiting them to Wi-Fi use, especially when roaming. We strongly recommend users go through their app lists and disable unnecessary mobile data permissions before trips abroad.

Where to configure it: Settings → Mobile

Background App Refresh

What it is: Permission to refresh content when you are not using an app, that is, when it’s running in the background.

What the risks are: Updating content consumes data and battery power, but all modern smartphones are designed to run apps in the background. Take action only if you notice that a certain program is sending a lot of data over the Web and significantly reducing your smartphone’s runtime. You can check apps’ mobile data and power consumption in the system settings, under Mobile Data and Battery.
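
Apps can see (but not change) that setting, so a well-written one adapts rather than failing silently. A minimal sketch of the check:

```swift
import UIKit

// backgroundRefreshStatus mirrors the per-app switch in
// Settings → General → Background App Refresh
switch UIApplication.shared.backgroundRefreshStatus {
case .available:
    print("May refresh content in the background")
case .denied:
    print("The user switched background refresh off for this app")
case .restricted:
    print("Background refresh is blocked system-wide, e.g., by parental controls")
@unknown default:
    break
}
```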

Where to configure it: Settings → General → Background App Refresh

Better safe than sorry

Protecting yourself from apps that are overly greedy for your personal information takes very little time, and we strongly recommend taking it: consider every request carefully and be judicious about what you share and with whom. Your privacy is in your hands, so feel free to deny any request that seems suspicious or unreasonable, and rest easy knowing your photos, videos, documents, and other data are safe.

Apple plans to use CSAM Detection to monitor users | Kaspersky official blog https://www.kaspersky.com/blog/what-is-apple-csam-detection/41502/ Mon, 30 Aug 2021 15:10:13 +0000 https://www.kaspersky.com/blog/?p=41502 In early August 2021, Apple unveiled its new system for identifying photos containing images of child abuse. Although Apple’s motives — combating the dissemination of child pornography — seem indisputably well-intentioned, the announcement immediately came under fire.

Apple has long cultivated an image of itself as a device maker that cares about user privacy. New features anticipated for iOS 15 and iPadOS 15 have already dealt a serious blow to that reputation, but the company is not backing down. Here’s what happened and how it will affect average users of iPhones and iPads.

What is CSAM Detection?

Apple’s plans are outlined on the company’s website. The company developed a system called CSAM Detection, which searches users’ devices for “child sexual abuse material,” also known as CSAM.

Although “child pornography” is synonymous with CSAM, the National Center for Missing and Exploited Children (NCMEC), which helps find and rescue missing and exploited children in the United States, considers “CSAM” the more appropriate term. NCMEC provides Apple and other technology firms with information on known CSAM images.

Apple introduced CSAM Detection along with several other features that expand parental controls on Apple mobile devices. For example, parents will receive a notification if someone sends their child a sexually explicit photo in Apple Messages.

The simultaneous unveiling of several technologies resulted in some confusion, and a lot of people got the sense that Apple was now going to monitor all users all the time. That’s not the case.

CSAM Detection rollout timeline

CSAM Detection will be part of the iOS 15 and iPadOS 15 mobile operating systems, which will become available to users of all current iPhones and iPads (iPhone 6S, fifth-generation iPad and later) this autumn. Although the function will theoretically be available on Apple mobile devices everywhere in the world, for now the system will work fully only in the United States.

How CSAM Detection will work

CSAM Detection works only in conjunction with iCloud Photos, which is the part of the iCloud service that uploads photos from a smartphone or tablet to Apple servers. It also makes them accessible on the user’s other devices.

If a user disables photo syncing in the settings, CSAM Detection stops working. Does that mean photos are compared with those in criminal databases only in the cloud? Not exactly. The system is deliberately complex; Apple is trying to guarantee a necessary level of privacy.

As Apple explains, CSAM Detection works by scanning photos on a device to determine whether they match photos in NCMEC’s or other similar organizations’ databases.

Simplified diagram of how CSAM Detection works. Source

The detection method uses NeuralHash technology, which in essence creates digital identifiers, or hashes, for photos based on their contents. If a hash matches one in the database of known child-exploitation images, then the image and its hash are uploaded to Apple’s servers. Apple performs another check before officially registering the image.
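
To illustrate the general idea of perceptual hashing, here is a toy sketch. It is emphatically not Apple's NeuralHash (which is a neural network), and the hash values are made up; the point is simply that matching reduces to comparing short fingerprints that tolerate small changes to an image.

```swift
// Toy illustration of perceptual-hash matching, NOT Apple's NeuralHash:
// visually similar images should yield identical or near-identical hashes
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

let userPhotoHash: UInt64 = 0b1011_0110_1100_1010  // hypothetical hash of a user photo
let databaseHash: UInt64  = 0b1011_0110_1100_1000  // hypothetical hash from the database

// Declare a match when the fingerprints are close enough
print("Match:", hammingDistance(userPhotoHash, databaseHash) <= 2)
```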

Another component of the system, cryptographic technology called private set intersection, encrypts the results of the CSAM Detection scan such that Apple can decrypt them only if a series of criteria are met. In theory, that should prevent the system from being misused — that is, it should prevent a company employee from abusing the system or handing over images at the request of government agencies.

In an August 13 interview with the Wall Street Journal, Craig Federighi, Apple’s senior vice president of software engineering, articulated the main safeguard for the private set intersection protocol: To alert Apple, 30 photos need to match images in the NCMEC database. As the diagram below shows, the private set intersection system will not allow the data set — information about the operation of CSAM Detection and the photos — to be decrypted until that threshold is reached. According to Apple, because the threshold for flagging an image is so high, a false match is very unlikely — a “one in a trillion chance.”
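
Back-of-the-envelope arithmetic shows why a high threshold matters. The numbers below are our illustrative assumptions, not Apple's published parameters; they only demonstrate how quickly the odds collapse as the threshold rises.

```swift
import Foundation

let p = 1e-6           // assumed false-match probability per photo
let n = 100_000.0      // assumed size of a photo library
let lambda = n * p     // expected false matches per library: 0.1

// With small p, false-match counts are roughly Poisson-distributed, and the
// probability of reaching a 30-photo threshold is dominated by lambda^30 / 30!
let log10P = (30.0 * log(lambda) - lgamma(31.0)) / log(10.0)
print("P(30+ false matches) is about 10^\(Int(log10P.rounded()))")  // about 10^-62
```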

An important feature of the CSAM Detection system: to decrypt the data, a large number of photos must match. Source

What happens when the system is alerted? An Apple employee manually checks the data, confirms the presence of child pornography, and notifies authorities. For now the system will work fully only in the United States, so the notification will go to NCMEC, which is sponsored by the US Department of Justice.

Problems with CSAM Detection

Potential criticism of Apple’s actions falls into two categories: questioning the company’s approach and scrutinizing the protocol’s vulnerabilities. At the moment, there is little concrete evidence that Apple made a technical error (an issue we will discuss in more detail below), although there has been no shortage of general complaints.

For example, the Electronic Frontier Foundation has described these issues in great detail. According to the EFF, by adding image scanning on the user side, Apple is essentially embedding a back door in users’ devices. The EFF has criticized the concept since as early as 2019.

Why is that a bad thing? Well, imagine a device whose data is completely encrypted (as Apple asserts) but which then begins reporting on its contents to outsiders. At the moment the target is child pornography, prompting the common refrain, "If you're not doing anything wrong, you have nothing to worry about," but as long as such a mechanism exists, we cannot know that it won't be applied to other content.

Ultimately, that criticism is more political than technological. The problem lies in the absence of a social contract balancing security and privacy. All of us, from bureaucrats, device makers, and software developers to human-rights activists and rank-and-file users, are trying to define that balance now.

Law-enforcement agencies complain that widespread encryption complicates collecting evidence and catching criminals, and that is understandable. Concerns about mass digital surveillance are also obvious. Opinions, including opinions about Apple’s policies and actions, are a dime a dozen.

Potential problems with implementing CSAM Detection

Once we move past ethical concerns, we hit some bumpy technological roads. Any program code produces new vulnerabilities. Never mind what governments might do; what if a cybercriminal took advantage of CSAM Detection’s vulnerabilities? When it comes to data encryption, the concern is natural and valid: If you weaken information protection, even if it’s with only good intentions, then anyone can exploit the weakness for other purposes.

An independent audit of the CSAM Detection code has just begun and could take a very long time. However, we have already learned a few things.

First, code that makes it possible to compare photos against a "model" has existed in iOS (and macOS) since version 14.3, and it is entirely possible that this code will become part of CSAM Detection. Utilities for experimenting with the image-matching algorithm have already turned up collisions. For example, according to Apple's NeuralHash algorithm, the two images below have the same hash:

According to Apple’s NeuralHash algorithm, these two photos match. Source

If it is possible to pull out the database of hashes of illegal photos, then it is possible to create “innocent” images that trigger an alert, meaning Apple could receive enough false alerts to make CSAM Detection unsustainable. That is most likely why Apple separated the detection, with part of the algorithm working only on the server end.

There is also this analysis of Apple’s private set intersection protocol. The complaint is essentially that even before reaching the alert threshold, the PSI system transfers quite a bit of information to Apple’s servers. The article describes a scenario in which law-enforcement agencies request the data from Apple, and it suggests that even false alerts might lead to a visit from the police.

For now, the above are just initial tests of an external review of CSAM Detection. Their success will depend largely on the famously secretive company providing transparency into CSAM Detection’s workings — and in particular, its source code.

What CSAM Detection means for the average user

Modern devices are so complex that it is no easy feat to determine how secure they really are — that is, to what extent they live up to the maker’s promises. All most of us can do is trust — or distrust — the company based on its reputation.

However, it is important to remember this key point: CSAM Detection operates only if users upload photos to iCloud. Apple’s decision was deliberate and anticipated some of the objections to the technology. If you do not upload photos to the cloud, nothing will be sent anywhere.

You may remember the notorious conflict between Apple and the FBI in 2016, when the FBI asked Apple for help unlocking an iPhone 5C that belonged to a mass shooter in San Bernardino, California. The FBI wanted Apple to write software that would let the FBI get around the phone’s password protection.

The company, recognizing that complying could result in unlocking not only the shooter’s phone but also anyone’s phone, refused. The FBI backed off and ended up hacking the device with outside help, exploiting the software’s vulnerabilities, and Apple maintained its reputation as a company that fights for its customers’ rights.

However, the story isn't quite that simple. Apple did hand over a copy of the data from iCloud. In fact, the company has access to practically any user data uploaded to the cloud. Some of it, such as Keychain passwords and payment information, is stored with end-to-end encryption, but most information is encrypted only to protect it from unsanctioned access, that is, from a hack of the company's servers. That means the company can decrypt the data.

The implications make for perhaps the most interesting plot twist in the story of CSAM Detection. The company could, for example, simply scan all of the images in iCloud Photos (as Facebook, Google, and many other cloud service providers do). Apple created a more elegant mechanism that would help it repel accusations of mass user surveillance, but instead, it drew even more criticism — for scanning users’ devices.

Ultimately, the hullabaloo hardly changes anything for the average user. If you are worried about protecting your data, you should look at any cloud service with a critical eye. Data you store only on your device is still safe. Apple’s recent actions have sown well-founded doubts. Whether the company will continue in this vein remains an open question.
