Technology – Kaspersky official blog https://www.kaspersky.com/blog The official blog from Kaspersky covers information to help protect you against viruses, spyware, hackers, spam & other forms of malware.

How to enable and configure passkeys for your Google account | Kaspersky official blog https://www.kaspersky.com/blog/how-to-set-up-passkeys-in-google-account/49515/ Fri, 03 Nov 2023 14:49:20 +0000

Google recently announced that it’s planning on making so-called “passkeys” the default option for logging into Google accounts. So, the next time you sign in to YouTube, Gmail, Google Docs, Google Maps, or any other app from the search giant, you’ll most likely be prompted to create such a passkey.

In this post, we discuss where you can set up passkeys for your Google account, what options are available, and what to do if you encounter difficulties. But first, let’s talk about what this technology actually is and how it works.

What are passkeys?

Passkeys (a combination of “pass” + “key”) are developed by the FIDO Alliance, an organization with a mission to create new authentication standards that will eventually reduce humanity’s reliance on passwords. If you have a hardware access key — often called a YubiKey (as the most popular brand) — you’re already familiar with one of the FIDO Alliance’s developments.

Passkeys are the next stage in the evolution of new authentication technologies. Previous FIDO Alliance developments focused on additional authentication factors — secondary login verification options working in conjunction with universally hated passwords. Passkeys, on the other hand, are designed not to supplement but to entirely replace passwords.

The major tech giants — Apple, Google, and Microsoft — have already integrated support for this technology into their infrastructure and are ready to allow users to abandon passwords. In fact, Google is planning on encouraging users to do so in the near future.

Unfortunately, the FIDO Alliance didn’t provide a standard translation for the term “passkey” from English to any other language. Therefore, companies implementing this authentication mechanism can call it whatever they want, without much regard for their peers. A common term has not yet been chosen in French, Portuguese, or even Spanish.

Passkeys, AKA access keys or entry keys

How Apple, Google, and Microsoft name passkeys in different languages

How passkeys work and why all this is needed

Passkeys completely replace passwords, eliminating the need to create or remember sequences of characters.

Here’s how it works. When a user registers a passkey on a service, a pair of related encryption keys is created — a private key and a public key. This is called public-key cryptography. The basic idea is that if you encrypt something with the public key, it can only be decrypted with the private key.

So, the private key stays on the user’s device, while the public key is sent to the service. These two keys are then used to encrypt the dialog that occurs when a user logs in to the service:

  • The service sends the user a request encrypted with the public key, containing a very large random number.
  • The user’s device asks them to confirm that they are indeed the user. Usually, this is done through biometrics, like placing a finger on the sensor or looking into the camera, but a PIN code can also be used.
  • Upon successful confirmation, the user’s device decrypts the request from the service with the private key and retrieves the random number from it. Without the private key, nobody can decrypt this message correctly and obtain the secret number.
  • Based on this random number from the service’s request, the user’s device creates a digital signature with a certain algorithm — it calculates a new very large number — and sends it back to the service.
  • The service, on its end, performs the exact same calculations and compares the results. If the calculated number matches the one it received from the user’s device, the request was decrypted correctly. The user therefore possesses the corresponding private key and can be authorized by the service.
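The challenge-response exchange above can be sketched with textbook RSA. This is a deliberately tiny, insecure keypair for illustration only; real passkeys rely on vetted algorithms such as ES256 or Ed25519, and the exact message flow varies by implementation:

```python
import hashlib
import secrets

# Toy textbook-RSA keypair (tiny, insecure primes; purely illustrative).
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (stays on the device)

# 1. The service encrypts a large random number with the public key.
challenge = secrets.randbelow(n - 2) + 2
ciphertext = pow(challenge, e, n)

# 2. After the user confirms via biometrics or PIN, the device decrypts
#    the request with the private key and recovers the random number.
recovered = pow(ciphertext, d, n)

# 3. The device derives a response from that number; the service performs
#    the same computation on its copy of the number and compares.
device_response = hashlib.sha256(str(recovered).encode()).hexdigest()
service_expected = hashlib.sha256(str(challenge).encode()).hexdigest()
assert device_response == service_expected  # the user holds the private key
```

Note that only the ciphertext and the derived response cross the network; the private key and the recovered number never leave the device.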

As you can see, under the hood, this mechanism is quite complex. But the good news is that all the cryptographic magic is completely hidden from the user. In practice, it’s very simple: you just need to press the “Log in” button and place your finger on the sensor (or look into the camera). All the complicated work runs in the background on your smartphone or computer.

Why is this even necessary? Passkeys are an attempt to simultaneously strengthen security and simplify the user’s life. The former is achieved by replacing passwords, which are not so reliable, with extremely robust encryption keys. The latter is accomplished by eliminating the need for users to come up with something, remember it, and perform any additional actions for two-factor authentication.
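To put rough numbers on “not so reliable”: a typical human-chosen password carries far less entropy than the keys behind a passkey. The figures below are back-of-the-envelope assumptions of ours (an 8-character password over 62 symbols versus a 256-bit private key), not values from the article:

```python
import math

# Back-of-the-envelope entropy comparison (illustrative assumptions).
password_entropy = 8 * math.log2(62)  # 8 chars from A-Z, a-z, 0-9: ~47.6 bits
passkey_entropy = 256                 # e.g. a 256-bit elliptic-curve private key

print(round(password_entropy, 1), passkey_entropy)
```

Even before accounting for predictable human choices, the passkey's keyspace is larger by a factor of more than five in bits, i.e. astronomically larger in absolute terms.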

Thus, passkeys are designed — in theory — to provide the highest level of security without requiring any effort from the user.

How to set up access to your Google account with a passkey instead of a password

Now let’s talk about how this all works in practice and how to set up access to your Google account using passkeys. It’s very straightforward. Here’s what you need to do:

  • Go to your Google account settings. You can do this through any Google service (such as Gmail) or directly through the Google Chrome browser, which you might already have. To do this, click on your avatar in the top right corner of the screen and select Manage your Google Account.

Where to find the passkey settings for your Google account, step 1

  • On the page that opens, select Security.

Where to find the passkey settings for your Google account, step 2

  • Scroll down to the How you sign in to Google section.

Where to find the passkey settings for your Google account, step 3

  • Under the list of different sign-in verification and account-recovery options, find the Passkeys button and click on it.

Next, various options are possible, but for starters, I suggest creating a local passkey on your computer, so that you no longer need to enter a password to log in to your Google account in the browser. To do this:

  • Click on the blue Create a passkey button at the top of the screen.

Creating a local passkey for your Google account in a desktop browser, step 1

  • In the pop-up window, click Continue.

Creating a local passkey for your Google account in a desktop browser, step 2

  • After that, confirm the action using the method you use to unlock your device — in my case, it’s fingerprint recognition.

Creating a local passkey for your Google account in a desktop browser, step 3

  • Congrats! You’ve created a passkey and can now sign in to your Google account in this browser without a password.

Now let’s create another passkey on your smartphone. This allows you to sign in to Google without a password on this smartphone. And this same passkey can be used to sign in on other devices — via Bluetooth.

Before you begin, make sure Bluetooth is enabled on both your smartphone and computer, and grant the browser permission to access it (if this hasn’t been done already). Next, follow these steps:

  • Return to the Passkeys page and click the white Create a passkey button at the bottom of the screen.

Creating a passkey for your Google account on a smartphone, step 1

  • In the pop-up window, select Use another device.

Creating a passkey for your Google account on a smartphone, step 2

  • Another pop-up window will appear with a QR code — scan it with your smartphone’s camera.

Creating a passkey for your Google account on a smartphone, step 3

  • Then, confirm the creation of the passkey for your smartphone with the method you use to unlock it.

Confirming the passkey registration on iPhone. Source

That’s it! You’ve created a passkey on your smartphone as well. Using it, you can sign in to your Google account without a password on any device. It’s possible to create multiple passkeys — so if you have many devices, you can have a key for each.

A passkey for a Google account on a smartphone successfully created

Additionally, you can store passkeys using a hardware authenticator — also called a security key, or a YubiKey after the most well-known brand. However, not all hardware authenticators will work: you need a key with a built-in user-verification mechanism — a PIN code or a fingerprint sensor. If you try to create a passkey on a key without such a mechanism, the registration will be successful, but when logging in you’ll still be asked to enter the account password — defeating the whole purpose of the endeavor.

Not all YubiKeys can be used for storing passkeys

It’d be nice to receive this warning during the key registration process — not when you’re about to use it to log in to your account

Backup plan: passwords and one-time codes from the app

The login confirmation mechanism using passkeys is highly automated — with all the complicated procedures isolated from the user. So, as long as everything is working fine, logging in with passkeys is really convenient and easy. However, this isolation also has a downside: when something doesn’t work, it’s nearly impossible to understand what went wrong, why, and how to fix it.

For example, one of the passkeys I created flat-out refused to work for passwordless login. I couldn’t figure out the problem: in my Google account settings it was displayed as active, but it just… didn’t work. Fortunately, I had plenty of other access verification options enabled for that account.

Error logging in to Google account using passkey

Something went wrong. Thanks, Captain Google!

So, for now, I prefer to think of passkeys as a backup login option that can occasionally save time. But in my opinion, it’s too early to discount passwords and two-factor authentication for Google accounts. Something tells me they might still come in handy when the passkey suddenly doesn’t work. Most likely, that will happen at the worst possible time.

The good news is that since you’ll be entering your Google account password less frequently now, you won’t need to memorize it. Consequently, you can make the character combination as secure as possible — that is, very long and completely random, say, 32 or even 64 characters. And Kaspersky Password Manager can generate and remember it for you.

By the way, in the password manager, you can also receive one-time codes for two-factor authentication — this feature was recently added to Kaspersky Password Manager.

Where Linux is in your home, and how to protect Linux devices from hacking | Kaspersky official blog https://www.kaspersky.com/blog/linux-at-home-threats-and-protection/49105/ Wed, 27 Sep 2023 14:16:42 +0000 https://www.kaspersky.com/blog/?p=49105 Over the first 23 years of this century, the Linux operating system has become as ubiquitous as Windows. Although only 3% of people use it on their laptops and PCs, Linux dominates the Internet of Things, and is also the most popular server OS. You almost certainly have at least one Linux device at home — your Wi-Fi router. But it’s highly likely there are actually many more: Linux is often used in smart doorbells, security cameras, baby monitors, network-attached storage (NAS), TVs, and so on.

At the same time, Linux has always had a reputation of being a “trouble-free” OS that requires no special maintenance and is of no interest to hackers. Unfortunately, neither of these things is true of Linux anymore. So what are the threats faced by home Linux devices? Let’s consider three practical examples.

Router botnet

By running malware on a router, security camera, or some other device that’s always on and connected to the internet, attackers can exploit it for various cyberattacks. The use of such bots is very popular in DDoS attacks. A textbook case was the Mirai botnet, used to launch the largest DDoS attacks of the past decade.

Another popular use of infected routers is running a proxy server on them. Through such a proxy, criminals can access the internet using the victim’s IP address and cover their tracks.

Both of these services are constantly in demand in the cybercrime world, so botnet operators resell them to other cybercriminals.

NAS ransomware

Major cyberattacks on large companies with subsequent ransom demands — that is, ransomware attacks — have made us almost forget that this underground industry started with very small threats to individual users. Encrypting your computer and demanding a hundred dollars for decryption — remember that? In a slightly modified form, this threat re-emerged in 2021 and evolved in 2022 — but now hackers are targeting not laptops and desktops, but home file servers and NAS. At least twice, malware has attacked owners of QNAP NAS devices (Qlocker, Deadbolt). Devices from Synology, LG, and ZyXEL faced attacks as well. The scenario is the same in all cases: attackers hack publicly accessible network storage over the internet by brute-forcing passwords or exploiting vulnerabilities in its software. Then they run Linux malware that encrypts all the data and presents a ransom demand.

Spying on desktops

Owners of desktop or laptop computers running Ubuntu, Mint, or other Linux distributions should also be wary. “Desktop” malware for Linux has been around for a long time, and now you can even encounter it on official websites. Just recently, we discovered an attack in which some users of the Linux version of Free Download Manager (FDM) were being redirected to a malicious repository, where they downloaded a trojanized version of FDM onto their computers.

To pull off this trick, the attackers hacked into the FDM website and injected a script that randomly redirected some visitors to the official, “clean” version of FDM, and others to the infected one. The trojanized version deployed malware on the computer, stealing passwords and other sensitive information. There have been similar incidents in the past, for example, with Linux Mint images.

It’s important to note that vulnerabilities in Linux and popular Linux applications are regularly discovered (here’s a list just for the Linux kernel). Therefore, even correctly configured OS tools and access roles don’t provide complete protection against such attacks.

Basically, it’s no longer advisable to rely on widespread beliefs such as “Linux is less popular and not targeted”, “I don’t visit suspicious websites”, or “just don’t work as a root user”. Protection for Linux-based workstations must be as thorough as for Windows and macOS ones.

How to protect Linux systems at home

Set a strong administrator password for your router, NAS, baby monitor, and home computers. The passwords for these devices must be unique. Brute forcing passwords and trying default factory passwords remain popular methods of attacking home Linux. It’s a good idea to store strong (long and complex) passwords in a password manager so you don’t have to type them in manually each time.
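A password of the strong, unique kind described above can be generated in a couple of lines. This is a sketch using Python's standard library; a password manager does the same job and, crucially, also remembers the result for you:

```python
import secrets
import string

# Generate a 32-character random password from letters, digits, and punctuation.
alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(32))
print(password)
```

The `secrets` module is used instead of `random` because it draws from the operating system's cryptographically secure randomness source.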

Update the firmware of your router, NAS, and other devices regularly. Look for an automatic update feature in the settings — that’s very handy here. These updates will protect against common attacks that exploit vulnerabilities in Linux devices.

Disable Web access to the control panel. Most routers and NAS devices allow you to restrict access to their control panel. Ensure your devices cannot be accessed from the internet and are only available from the home network.

Minimize unnecessary services. NAS devices, routers, and even smart doorbells function as miniature servers. They often include additional features like media hosting, FTP file access, printer connections for any home computer, and command-line control over SSH. Keep only the functions you actually use enabled.
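One quick way to see what a device on your network actually exposes is to probe a few common service ports. This is a minimal sketch — the port-to-service mapping below is a small assumed sample, and on real hardware you'd check the admin interface or use a proper network scanner:

```python
import socket

# A few common service ports (assumed sample) and the services behind them.
COMMON_PORTS = {21: "FTP", 22: "SSH", 80: "HTTP", 443: "HTTPS", 445: "SMB"}

def open_ports(host: str, timeout: float = 0.3) -> list:
    """Return (port, service) pairs that accept a TCP connection on host."""
    found = []
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port is open
                found.append((port, service))
    return found

print(open_ports("127.0.0.1"))
```

Anything that answers here and isn't a function you actually use is a candidate for disabling in the device's settings.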

Consider limiting cloud functionality. If you don’t use the cloud functions of your NAS (such as WD My Cloud) or can do without them, it’s best to disable them entirely and access your NAS only over your local home network. Not only will this prevent many cyberattacks, but it will also safeguard you against incidents on the manufacturer’s side.

Use specialized security tools. Depending on the device, the names and functions of available tools may vary. For Linux PCs and laptops, as well as some NAS devices, antivirus solutions are available, including regularly updated open-source options like ClamAV. There are also tools for more specific tasks, such as rootkit detection.

For desktop computers, consider switching to the Qubes operating system. It’s built entirely on the principles of containerization, allowing you to completely isolate applications from each other. Qubes containers are based on Fedora and Debian.

What is the Fediverse, and how does it work? | Kaspersky official blog https://www.kaspersky.com/blog/what-is-fediverse/48916/ Thu, 21 Sep 2023 11:06:46 +0000 https://www.kaspersky.com/blog/?p=48916 After Elon Musk “broke” his Twitter (now known as X) and Mark Zuckerberg released his Threads, there’s been a lot of talk on the internet about something called the Fediverse. Many see it as humanity’s last hope to escape the current social network mess.

In this post, we take a look at what this Fediverse is, how it works, what it offers users right now, and what it may change in the near future.

What’s wrong with regular social networks?

Let’s start with why the Fediverse is needed in the first place. The main problem with today’s social networks is that they’ve become too closed and self-absorbed (not to mention there are an awful lot of them). Often, you can’t even access a significant portion of a social network’s content if you’re not registered on it — and don’t even think about further interactions on the platform.

For example, to like a post on Twitter or leave a comment on a YouTube video, you have to be registered. When it comes to social networks that are part of Mark Zuckerberg’s empire, it’s even worse: without an account, you usually can’t even get acquainted with the content, let alone like it.

The second major problem with social networks is that they don’t really produce anything themselves. Users create all the content on social networks, which the massive and powerful corporations behind the networks then profit from. And, of course, corporations have absolutely no respect for their users’ privacy — collecting an incredible amount of data about them. This has already led to major scandals in the past, and will most likely result in a whole bunch of problems in the future if nothing changes drastically.

The way things are currently organized, there’s another significant risk associated with the complete lack of user control over the platforms that they are, in fact, creating. Let’s just imagine a huge social network, which just happened to play a significant role in global politics, being taken over by a person with rather peculiar views. Its users are left with no choice but to adapt — or look for another platform with a more reasonable owner.

The Fediverse is designed to solve all these problems of conventional social networks: excessive centralization, complete lack of accountability, content isolation, collection of user data, and violation of user privacy.

The theoretical side: what the Fediverse is, and how it works

The Fediverse (a combination of “federation” and “universe”) is an association of independent social networks, which allows users to interact with each other in much the same way as they would within a single platform. That is — read, subscribe/follow, like, share content, comment, and so on.

And each platform participating in the Fediverse is federated itself: it consists of a community of independent servers (referred to as “instances” within the Fediverse).

An essential feature of the Fediverse is therefore decentralization. Each instance within the Fediverse has its owners (who independently create and maintain the server and bear all expenses for its operation), its own user community, rules, moderation system, and often some sort of theme.

The specially designed ActivityPub protocol is used for interaction among all these independent instances. ActivityPub is developed by the organization that specializes in creating common protocols that the internet runs on — the World Wide Web Consortium (W3C).
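Under the hood, ActivityPub servers exchange JSON objects built from the W3C ActivityStreams vocabulary. A "Like" sent from a user on one instance to a post on another looks roughly like this (the actor and object URLs below are made up for illustration):

```python
import json

# A minimal ActivityStreams "Like" activity (hypothetical actor/object URLs).
like_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://mastodon.social/users/alice",
    "object": "https://pixelfed.example/posts/123",
}

print(json.dumps(like_activity, indent=2))
```

Because every participating instance understands this shared vocabulary, a like from a Mastodon account can land on a Pixelfed photo without either server knowing anything else about the other's software.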

The largest Mastodon instance

Mastodon.social is the largest instance of Mastodon, the largest social network in the Fediverse

Anyone can create their own instance within the Fediverse. All you have to do is:

  • Rent or set up a server at home;
  • Install the appropriate server software on it (usually open-source, free);
  • Connect to the internet;
  • Pay for the domain;
  • Create a community, and develop its rules, theme, and so on.


It’s important to note that a significant portion of the Fediverse, at least for now, runs on pure enthusiasm, and sometimes on donations from supporters or the occasional banner ad. There’s currently no sustainable commercial model here, and it seems there’s no intention to implement one yet.

How the Fediverse works for the average user

From an ordinary user’s perspective, you register on one of the servers belonging to a particular social network that’s part of the Fediverse. Then, with this same account, you can interact with users of any other servers within the Fediverse — as if you could use a Twitter account to comment on a YouTube video or follow someone on Instagram. This removes the boundaries between different social networks, along with the need to create separate accounts in each of them.

However, in reality, it’s not as simple as it sounds: Fediverse instances are often quite closed communities that aren’t particularly welcoming to outsiders, and registration is often closed to newcomers. Logging into one social network with an account from another is usually not possible at all. Moreover, there’s no way to search across instances in the Fediverse.

So, basically, yes, you can indeed access the content of (almost) any Fediverse user without leaving the instance where you’re registered. You can probably even comment, like, or repost that user’s content, all while staying within the comfort and familiarity of your own instance. But there’s one catch — you need to know the address of that user. And knowing it isn’t so simple because, as mentioned above, there’s no search function in the Fediverse.

Pixelfed — A federated Instagram

Pixelfed — A federated alternative to Instagram

Explaining the Fediverse by analogy

Most people use the analogy of email to explain the Fediverse: it doesn’t matter which server you’re registered with, you can still send an email to anyone; for example, to your mom’s Gmail account from your work address at bigcorp.com. But personally, I think email is not the best analogy here — it’s too simple and uniform. In my opinion, it’s much better to describe the Fediverse in terms of the good old telephone system.

The global telephone system integrates a bunch of different technologies, from rotary dial phones connected to analog switching centers, to smartphones on the cutting-edge 5G network, and from virtual IP telephony numbers to satellite-link communication. For the end user, the technological solution underlying any particular network is completely unimportant. And there can be any number of these networks. They all support a single protocol for basic interaction, making them compatible with each other — you can call any number, whether it’s virtual or satellite.

Similarly, in the Fediverse, whether a platform is primarily text-based, video streaming, or graphic, it can participate in the project and its users can “call” other platforms.

One of the Pleroma instances

This is how one of the instances of the microblogging platform Pleroma looks. Source

However, the compatibility of telephone networks is far from complete. Each network may have its own special services and features — try sending an emoji to your great-grandmother’s landline phone. And on top of universal addressing (the international phone number format) there are often some local quirks: all those 0s or 00s instead of a normal country code, the possibility of not entering any codes at all when calling within a specific network (such as a city or office network), different formats for recording numbers (various dashes, brackets, and spaces, which can easily confuse people unfamiliar with local rules), and so on.

Again, the same goes for the Fediverse: while its platforms are generally connected and compatible at the top level, the user experience and functionality vary greatly from one platform to another. To figure out how to perform a certain action on a given service, you often have to delve into the local specifics. It might actually be impossible to “call” certain instances because, while they formally support all the necessary technologies, they’ve decided to isolate themselves from the outside world for some reason.

In general, compared to email, the Fediverse is a much more diverse and less standardized collection of relatively unique instances. But despite this uniqueness, these instances do allow their users to interact with each other to some extent since they all support a common protocol.

Lemmy, the Fediverse Reddit analog

Lemmy — one of the Reddit analogs in the Fediverse

The practical side: which services are compatible with the Fediverse now, and which ones will be in the future

Now let’s turn to the practical side of the issue — what social networks are already operating within the Fediverse. Here’s a list of the most significant ones:

  • Mastodon — The largest and most popular social platform within the Fediverse, accounting for about half of its active users. It’s a microblogging social network — a direct Twitter analogue.
  • Misskey and Pleroma — Two other microblogging platforms that attract users with their atmosphere and cozy interface. Misskey was created in Japan, which has ensured its high popularity among fans of anime and related topics.
Misskey, the Japanese microblogging platform

Misskey — microblogging with a Japanese twist

  • PixelFed — A social networking platform for posting images. It’s a Fediverse version of Instagram but with a focus on landscape photography rather than glamorous golden poolside selfies.
  • PeerTube — A video streaming service. I’d like to say it’s the local equivalent of YouTube. However, since creating video content is so expensive, this analogy doesn’t completely hold up in reality.
  • Funkwhale — An audio streaming service. This can be considered a local version of Soundcloud or Spotify — with the same caveat as PeerTube.
  • Lemmy and Kbin — Social platforms for aggregating links and discussing them on forums. Sounds complicated, but they’re basically federated versions of Reddit.

Of course, these aren’t all the platforms within the Fediverse. You can find a more comprehensive list here.

A glimpse into the global future of the Fediverse

Another service worth mentioning that currently supports the ActivityPub protocol is the content management system WordPress. Some time ago an independent developer created a plugin for WordPress to ensure compatibility with this protocol.

Recently, Automattic, the company that owns both WordPress and Tumblr, acquired the plugin and hired its developer. Meanwhile, at the end of last year, Tumblr also announced future support for ActivityPub. Apparently, Automattic really believes in the potential of the Fediverse. Mozilla, Medium, and Flipboard are also now showing serious interest in the Fediverse.

But the most important — and quite unexpected — development for the federation of decentralized social networks was the promise made by Mark Zuckerberg’s company to add ActivityPub support to the recently launched social network Threads. It’s not yet been specified when exactly this will happen or in what form; however, if or when it does, several hundred million people from Threads/Instagram may suddenly join the existing few million Fediverse users.

What will this sudden popularity lead to? This isn’t such a simple question. Many long-time Fediverse users are visibly concerned about a possible invasion of “tourists”, and how these newcomers — accustomed to the noise of “big” social networks — will impact the communities that have been so carefully cultivated within the project.

How will the Fediverse cope with these sudden changes? Only time will tell. But one thing’s for sure: the further development and evolution of the Fediverse will be very interesting to watch…

Voice deepfakes: technology, prospects, scams | Kaspersky official blog https://www.kaspersky.com/blog/audio-deepfake-technology/48586/ Mon, 10 Jul 2023 14:35:54 +0000 https://www.kaspersky.com/blog/?p=48586 Have you ever wondered how we know who we’re talking to on the phone? It’s obviously more than just the name displayed on the screen. If we hear an unfamiliar voice when being called from a saved number, we know right away something’s wrong. To determine who we’re really talking to, we unconsciously note the timbre, manner, and intonation of speech. But how reliable is our own hearing in the digital age of artificial intelligence? As the latest news shows, what we hear isn’t always worth trusting, because voices can be faked with deepfakes.

Help, I’m in trouble

In spring 2023, scammers in Arizona attempted to extort money from a woman over the phone. She heard the voice of her 15-year-old daughter begging for help before an unknown man grabbed the phone and demanded a ransom, all while her daughter’s screams could still be heard in the background. The mother was positive that the voice was really her child’s. Fortunately, she found out fast that everything was fine with her daughter, leading her to realize that she was a victim of scammers.

It can’t be 100% proven that the attackers used a deepfake to imitate the teenager’s voice. Maybe the scam was of a more traditional nature, with the call quality, unexpectedness of the situation, stress, and the mother’s imagination all playing their part to make her think she heard something she didn’t. But even if neural network technologies weren’t used in this case, deepfakes can and do indeed occur, and as their development continues they become increasingly convincing and more dangerous. To fight the exploitation of deepfake technology by criminals, we need to understand how it works.

What are deepfakes?

Deepfake (“deep learning” + “fake”) artificial intelligence has been growing at a rapid rate over the past few years. Machine learning can be used to create compelling fakes of images, video, or audio content. For example, neural networks can be used in photos and videos to replace one person’s face with another while preserving facial expressions and lighting. While initially these fakes were low quality and easy to spot, as the algorithms developed the results became so convincing that now it’s difficult to distinguish them from reality. In 2022, the world’s first deepfake TV show was released in Russia, where deepfakes of Jason Statham, Margot Robbie, Keanu Reeves and Robert Pattinson play the main characters.

Deepfake versions of Hollywood stars in the Russian TV series PMJason

Deepfake versions of Hollywood stars in the Russian TV series PMJason. (Source)

Voice conversion

But today our focus is on the technology used for creating voice deepfakes. This is also known as voice conversion (or “voice cloning” if you’re creating a full digital copy of a voice). Voice conversion is based on autoencoders – a type of neural network that first compresses input data (the encoder) into a compact internal representation, and then learns to decompress it back from this representation (the decoder) to restore the original data. This way the model learns to present data in a compressed format while highlighting the most important information.

Autoencoder scheme. (Source)

To make voice deepfakes, two audio recordings are fed into the model, with the voice from the second recording converted to the first. The content encoder is used to determine what was said from the first recording, and the speaker encoder is used to extract the main characteristics of the voice from the second recording – meaning how the second person talks. The compressed representations of what must be said and how it’s said are combined, and the result is generated using the decoder. Thus, what’s said in the first recording is voiced by the person from the second recording.
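The data flow just described (content from one recording, voice from another, combined by a decoder) can be sketched as follows. The “encoders” here are random linear maps standing in for trained neural networks, so the output is meaningless audio features; the point is only to show how the pieces connect.

```python
import numpy as np

rng = np.random.default_rng(1)
FRAMES, FEATS = 50, 100  # a toy "spectrogram": 50 time frames x 100 features

# Stand-ins for trained networks: random linear maps, illustration only.
W_content = rng.normal(size=(FEATS, 16))      # content encoder
W_speaker = rng.normal(size=(FEATS, 8))       # speaker encoder
W_decoder = rng.normal(size=(16 + 8, FEATS))  # decoder

def content_encoding(audio):
    """Per-frame representation of WHAT is being said (recording 1)."""
    return audio @ W_content                  # shape: (frames, 16)

def speaker_embedding(audio):
    """One vector capturing HOW the speaker talks (recording 2)."""
    return (audio @ W_speaker).mean(axis=0)   # shape: (8,)

def decode(content, speaker):
    """Combine the two compressed representations into output features."""
    tiled = np.tile(speaker, (content.shape[0], 1))
    return np.concatenate([content, tiled], axis=1) @ W_decoder

recording_1 = rng.normal(size=(FRAMES, FEATS))  # the words to keep
recording_2 = rng.normal(size=(FRAMES, FEATS))  # the voice to imitate

fake = decode(content_encoding(recording_1), speaker_embedding(recording_2))
print(fake.shape)  # same shape as the input: recording 1's words, recording 2's "voice"
```

Note the asymmetry: the content encoding keeps one vector per time frame (the words change over time), while the speaker embedding is averaged into a single vector (the voice is assumed constant).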

The process of making a voice deepfake. (Source)

There are also approaches that don’t rely on autoencoders – for example, ones based on generative adversarial networks (GANs) or diffusion models. Research into how to make deepfakes is supported in particular by the film industry. Think about it: with audio and video deepfakes, it’s possible to replace the faces of actors in movies and TV shows, and dub movies with synchronized facial expressions into any language.

How it’s done

As we were researching deepfake technologies, we wondered just how hard it might be to make one’s own voice deepfake. It turns out there are lots of free open-source tools for working with voice conversion, but it isn’t so easy to get a high-quality result with them: it takes Python programming experience and solid audio-processing skills, and even then the quality is far from ideal. In addition to open-source tools, there are also proprietary and paid solutions available.

For example, in early 2023, Microsoft announced an algorithm that could reproduce a human voice based on an audio example that’s only three seconds long! This model also works with multiple languages, so you can even hear yourself speaking a foreign language. All this looks promising, but so far it’s only at the research stage. But the ElevenLabs platform lets users make voice deepfakes without any effort: just upload an audio recording of the voice and the words to be spoken, and that’s it. Of course, as soon as word got out, people started playing with this technology in all sorts of ways.

Hermione’s battle and an overly trusting bank

In full accordance with Godwin’s law, Emma Watson was made to read “Mein Kampf”, and another user used ElevenLabs technology to “hack” his own bank account. Sounds creepy? It does to us – especially when you add to the mix the popular horror stories about scammers collecting samples of voices over the phone by having folks say “yes” or “confirm” as they pretend to be a bank, government agency or poll service, and then steal money using voice authorization.

But in reality things aren’t so bad. Firstly, it takes about five minutes of audio recordings to create an artificial voice in ElevenLabs, so a simple “yes” isn’t enough. Secondly, banks also know about these scams, so voice can only be used to initiate certain operations that aren’t related to the transfer of funds (for example, to check your account balance). So money can’t be stolen this way.

To its credit, ElevenLabs reacted to the problem fast by rewriting the service rules, prohibiting free (i.e., anonymous) users from creating deepfakes based on their own uploaded voices, and blocking accounts reported for “offensive content”.

While these measures may be useful, they still don’t solve the problem of using voice deepfakes for suspicious purposes.

How else deepfakes are used in scams

Deepfake technology in itself is harmless, but in the hands of scammers it can become a dangerous tool with lots of opportunities for deception, defamation or disinformation. Fortunately, there haven’t been any mass cases of scams involving voice alteration, but there have been several high-profile cases involving voice deepfakes.

In 2019, scammers used this technology to shake down a UK-based energy firm. In a telephone conversation, the scammer pretended to be the chief executive of the firm’s German parent company, and requested the urgent transfer of €220,000 ($243,000) to the account of a certain supplier company. After the payment was made, the scammer called twice more – the first time to put the UK office staff at ease by reporting that the parent company had already sent a refund, and the second time to request another transfer. All three times the UK CEO was absolutely positive that he was talking with his boss because he recognized both his German accent and his tone and manner of speech. The second transfer wasn’t sent only because the scammer messed up and called from an Austrian number instead of a German one, which made the UK CEO suspicious.

A year later, in 2020, scammers used voice deepfakes to steal up to $35,000,000 from a Japanese company (neither the name of the company nor the exact amount stolen was disclosed by investigators).

It’s unknown which solutions (open source, paid, or even their own) the scammers used to fake voices, but in both the above cases the companies clearly suffered – badly – from deepfake fraud.

What’s next?

Opinions differ about the future of deepfakes. Currently, most of this technology is in the hands of large corporations, and its availability to the public is limited. But as the history of much more popular generative models like DALL-E, Midjourney and Stable Diffusion shows, and even more so with large language models (ChatGPT anybody?), similar technologies may well appear in the public domain in the foreseeable future. This is confirmed by a recent leak of internal Google correspondence in which representatives of the internet giant fear they’ll lose the AI race to open solutions. This will obviously result in an increase in the use of voice deepfakes – including for fraud.

The most promising step in the development of deepfakes is real-time generation, which will ensure the explosive growth of deepfakes (and fraud based on them). Can you imagine a video call with someone whose face and voice are completely fake? However, this level of data processing requires huge resources only available to large corporations, so the best technologies will remain private and fraudsters won’t be able to keep up with the pros. The high quality bar will also help users learn how to easily identify fakes.

How to protect yourself

Now back to our very first question: can we trust the voices we hear (that is, if they’re not the voices in our heads)? Being paranoid all the time and coming up with secret code words to use with friends and family is probably overdoing it; however, in more serious situations such paranoia might be appropriate. If everything develops according to the pessimistic scenario, deepfake technology in the hands of scammers could grow into a formidable weapon in the future, but there’s still time to get ready and build reliable methods of protection against counterfeiting: there’s already a lot of research into deepfakes, and large companies are developing security solutions. In fact, we’ve already talked in detail about ways to combat video deepfakes here.

For now, protection against AI fakes is only just beginning, so it’s important to keep in mind that deepfakes are just another kind of advanced social engineering. The risk of encountering fraud like this is small, but it’s still there, so it’s worth knowing and keeping in mind. If you get a strange call, pay attention to the sound quality. Is it in an unnatural monotone, is it unintelligible, or are there strange noises? Always double-check information through other channels, and remember that surprise and panic are what scammers rely on most.

]]>
How to set up a VPN on a router | Kaspersky official blog https://www.kaspersky.com/blog/how-to-use-vpn-on-routers/48410/ Fri, 09 Jun 2023 11:17:22 +0000 https://www.kaspersky.com/blog/?p=48410 VPNs are getting more popular by the day: better privacy, access to the content you need, and other advantages have won over even those not much interested in technology. To enjoy these benefits on all home devices — including computers and smartphones, game consoles and smart TVs — the best way is to set up a VPN directly on your router (aka “Wi-Fi box”). That way, there’s no need to waste time configuring a VPN on each device separately, plus you get all the benefits even where VPN support is lacking, such as on a smart TV or game consoles. Sounds interesting? Then let’s get started!…

VPN requirements

To protect your entire home network with a VPN, both your VPN and your router need to support this option. The first thing to note is that most free VPNs don’t offer network protection at the router level. Nor will your VPN run on the router if the VPN exists only in the form of a browser add-on or mobile app. If you’re not sure whether your VPN supports router-based operation, read the manual or contact tech-support.

It’s important to find out the details from tech support, not just a “yes/no” answer. What specific VPN protocol can be used for the router (and the whole network)? Are all the VPN servers you need available using this protocol? Armed with this knowledge, next go to the technical support site for your particular router.

Router requirements

First of all, the router must support sending all home traffic through the VPN channel. These days even cheap models have this feature, but there are still cases when a router can’t work with a VPN, especially if it’s leased out by the internet service provider (ISP). What can also happen is that the VPN is already being used to create a channel from the router to the ISP and is a part of the standard home internet setup. This kind of “VPN service” usually doesn’t provide the core benefits that most users want.

You can check your router in three ways:

  1. Go to the web control panel (the address and password are usually shown on the underside of the router) and study the available settings
  2. Read the documentation on the router vendor’s website
  3. Contact the vendor’s technical support or — if you got the router from your provider — get in touch with its tech-support

If your ISP doesn’t offer VPN support, consider switching provider. If the problem lies with the router itself, check for an alternative firmware that has the functionality you need. The best known are DD-WRT and OpenWRT — the links point straight to a page where you can check your router’s compatibility. Replacing the router firmware can be technically challenging, so make sure you fully understand both the procedure and risks before starting.

After making sure that the router offers VPN support in the first place, next check which specific VPN protocols it can use. The most common are OpenVPN and WireGuard, with each having its own pros and cons.

OpenVPN has been around for a very long time and is widely supported by routers, but doesn’t usually provide maximum VPN speed, and also puts a heavy load on the router’s processor. For cheap routers with a weak processor, this can affect their performance and overall Wi-Fi speed in the home.

The newer WireGuard protocol is very fast and secure. If you have a really fast Internet connection, WireGuard will outperform OpenVPN in terms of speed and a lower load on the router’s processor. Among the disadvantages are the more involved initial setup (the user has to generate a pair of client keys) and fewer connection options: WireGuard binds the user to a specific server, OpenVPN — to a location, so the latter lets you switch to another server in the same location if the one previously used is down. Besides, not all routers recognize WireGuard.

And almost all routers support legacy L2TP/IPsec and PPTP protocols. We do not recommend them, because they fall short of the latest security standards and don’t encrypt traffic by default. However, if the two more modern options are not available, and a VPN is still needed, better to use L2TP/IPsec or PPTP with traffic encryption enabled than no VPN at all.

How to activate VPN on a router

The specifics differ from provider to provider and from router to router, so we can only describe the setup in general terms.

The first step is to download the right VPN profile from the VPN website. The profile is usually individual, so you need to go to your personal account on the website and find the page with VPN profiles. This might be a list of protected devices where you can add a router, or a special Add Router section, or a section for managing specific VPN protocols (OpenVPN, WireGuard) where you can generate the desired connection profile.

For example, for Kaspersky VPN Secure Connection, you can create a router profile on the My Kaspersky site in the Secure Connection section in three simple steps. Currently, only an OpenVPN profile is offered for routers, but by the end of 2023 we plan to provide WireGuard support as well (note that WireGuard is now available in our VPN for Windows).

Creating an OpenVPN profile for a router on the My Kaspersky site.

When adding a new profile in your personal account, you need to answer certain questions. These include the profile name, your choice of server, and so on. The same window often provides space for technical details — such as private keys, names and passwords — but most providers support automatic generation of all this, in which case they can be left blank. Next, a link appears to download the .ovpn file for OpenVPN or .conf file for WireGuard.

For L2TP and PPTP, you don’t need to download anything. Instead, you need to write down some information from your personal account:

  • server address for connection
  • username and password
  • an additional encryption key (pre-shared key, PSK, secret key)
  • authentication type (PAP, CHAP)

Having gotten hold of this information, go to the web control panel of the router. Depending on the vendor’s… imagination, you may have to wander through a maze of subsections to get to the VPN properties:

  • Asus routers usually have a VPN → VPN client section
  • Keenetic routers hide VPN connections under Internet → Other Connections
  • in Netgear routers, go to Advanced Setup → VPN service
  • in TP-Link routers, open the Network → WAN tab

Take care, because routers can show VPN connections in two forms: as an external VPN connection to your home network (here the router acts as a VPN server and provides secure external access to your local network) and as a secure connection to a remote VPN server (here the router becomes a VPN client that connects securely to the VPN service). You need the second option.

Having found the right section, create a new connection and name it (say, for the VPN service and/or the location of the server), then enter the information retrieved from your personal account with the VPN provider.

For PPTP and L2TP/IPSEC, all information is required, including server addresses. For OpenVPN and WireGuard, attaching the OVPN/CONF profile file is usually enough, but sometimes you might also need to specify a username and password.

For some router models (for example, Keenetic), instead of a profile upload button, there’s a window for entering the VPN configuration; in this case, open the OVPN/CONF file in a text editor (yes, it’s a plain text file, and you can change its extension to .txt if you like), copy all the information from it, and paste it into this window. If you have any doubts about the correct settings, take a look at the router’s setup help pages — they’re usually found right in the Settings window.
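Since these profiles are plain text, you can sanity-check one before pasting it into the router. Here is a hedged example in Python: the configuration below is a made-up WireGuard-style profile (the keys and the server name are placeholders, not real values), parsed with the standard configparser module. Note that configparser copes with a single [Peer] section only; real multi-peer configs repeat the section name and need a dedicated parser.

```python
import configparser

# A made-up WireGuard-style profile; keys and endpoint are placeholders.
SAMPLE_CONF = """\
[Interface]
PrivateKey = <your-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE_CONF)

# Pull out the values a router setup form typically asks for.
address = parser["Interface"]["Address"]
endpoint = parser["Peer"]["Endpoint"]
host, port = endpoint.rsplit(":", 1)
print(address, host, port)  # 10.8.0.2/32 vpn.example.com 51820
```

If parsing fails or a field is missing, it’s a sign the profile was truncated when copied, which is a common cause of “Save” silently failing in router control panels.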

Setting up a VPN connection via OpenVPN in Keenetic routers.

Then click the Save button and look for the Activate button or On/Off switch for the VPN connection. That done, the VPN should in theory be on all the time and even activate itself automatically after a router restart. It’s a good idea to check this by going to a site like whatismyipaddress.com or iplocation.net on any home device: they’ll show you which region of the online world you’ve tunneled through to. That’s the VPN setup basically done — all devices connected to the router will now access the internet through an encrypted connection. And some routers even allow you to choose which home devices will connect directly to the internet and which will go through a VPN.

If for some reason a VPN can’t be set up on your router, you can protect your internet access by setting up secure DNS on your router. This won’t give you all the benefits of a secure VPN connection, but it can give you some — such as protecting kids from inappropriate content and blocking ads on all devices.

For maximum protection on up to 10 of your family’s devices, we recommend a Kaspersky Premium subscription, which, alongside protection against viruses, hacking, phishing, and data leaks, includes a fast and unlimited Kaspersky VPN Secure Connection, secure password manager and vault, a one-year free Kaspersky Safe Kids subscription, and many other benefits.

]]>
AI government regulation: why and how | Kaspersky official blog https://www.kaspersky.com/blog/ai-government-regulation/48220/ Thu, 18 May 2023 13:22:44 +0000 https://www.kaspersky.com/blog/?p=48220 I’m a bit tired by now of all the AI news, but I guess I’ll have to put up with it a bit longer, for it’s sure to continue to be talked about non-stop for at least another year or two. Not that AI will then stop developing, of course; it’s just that journalists, bloggers, TikTokers, Tweeters and other talking heads out there will eventually tire of the topic. But for now their zeal is fueled not only by the tech giants, but governments as well: the UK’s planning on introducing three-way AI regulation; China’s put draft AI legislation up for a public debate; the U.S. is calling for “algorithmic accountability“; the EU is discussing but not yet passing draft laws on AI, and so on and so forth. Lots of plans for the future, but, to date, the creation and use of AI systems haven’t been limited in any way whatsoever; however, it looks like that’s going to change soon.

One plainly debatable matter is, of course, the following: do we need government regulation of AI at all? And if so – why, and what should it look like?

What to regulate

What is artificial intelligence? (No) thanks to marketing departments, the term’s been used for lots of things — from cutting-edge generative models like GPT-4, to the simplest machine-learning systems, including some that have been around for decades. Remember T9 on push-button cellphones? Heard about automatic spam and malicious file classification? Do you check out film recommendations on Netflix? All of those familiar technologies are based on machine learning (ML) algorithms, aka “AI”.

Here at Kaspersky, we’ve been using such technologies in our products for close on 20 years, always preferring to modestly refer to them as “machine learning” — if only because “artificial intelligence” seems to call to most everyone’s mind things like talking supercomputers on spaceships and other stuff straight out of science fiction. However, such talking-thinking computers and droids would need to be fully capable of human-like thinking — to possess artificial general intelligence (AGI) or artificial superintelligence (ASI). Neither AGI nor ASI has been invented yet, and hardly will be in the foreseeable future.

Anyway, if all the AI types are measured with the same yardstick and fully regulated, the whole IT industry and many related ones aren’t going to fare well at all. For example, if we (Kaspersky) are ever required to get consent from all our training-set “authors”, we, as an information security company, will find ourselves up against the wall. We learn from malware and spam, and feed the knowledge gained into our machine learning, while their authors tend to prefer to withhold their contact data (who knew?!). Moreover, considering that data has been collected and our algorithms have been trained for nearly 20 years now — quite how far into the past would we be expected to go?

Therefore, it’s essential for lawmakers to listen not to marketing folks but to machine-learning/AI industry experts, and to discuss potential regulation in a specific and focused manner: for example, regulating multi-purpose systems trained on large volumes of open data, or decision-making systems with high levels of responsibility and risk.

And new AI applications will necessitate frequent revisions of regulations as they arise.

Why regulate?

To be honest, I don’t believe in a superintelligence-assisted Judgement Day within the next hundred years. But I do believe in a whole bunch of headaches from thoughtless use of the computer black box.

As a reminder to those who haven’t read our articles on both the splendor and misery of machine learning, there are three main issues regarding any AI:

  • It’s not clear just how good the training data used for it were/are.
  • It’s not clear at all what AI has succeeded in “comprehending” out of that stock of data, or how it makes its decisions.
  • And most importantly — the algorithm can be misused by its developers and its users alike.

Thus, anything at all could happen: from malicious misuse of AI, to unthinking compliance with AI decisions. Graphic real-life examples: fatal autopilot errors, deepfakes (1, 2, 3) by now habitual in memes and even the news, a silly error in school teacher contracting, the police apprehending a shoplifter but the wrong one, and a misogynous AI recruiting tool. Besides, any AI can be attacked with the help of custom-made hostile data samples: vehicles can be tricked using stickers, one can extract personal information from GPT-3, and anti-virus or EDR can be deceived too. And by the way, attacks on combat-drone AI described in science fiction don’t appear all that far-fetched any more.

In a nutshell, the use of AI hasn’t given rise to any truly massive problems yet, but there is clearly a lot of potential for them. Therefore, the priorities of regulation should be clear:

  1. Preventing critical infrastructure incidents (factories/ships/power transmission lines/nuclear power plants).
  2. Minimizing physical threats (driverless vehicles, misdiagnosing illnesses).
  3. Minimizing personal damage and business risks (arrests or hirings based on skull measurements, miscalculation of demand/procurements, and so on).

The objective of regulation should be to compel users and AI vendors to take care not to increase the risks of the above-mentioned negative outcomes. And the more serious the risk, the stronger the compulsion should be.

There’s another concern often aired regarding AI: the need for observance of moral and ethical norms, and to cater to psychological comfort, so to say. To this end, we see warnings given so folks know that they’re viewing a non-existent (AI-drawn) object or communicating with a robot and not a human, and also notices informing that copyright was respected during AI training, and so on. And why? So lawmakers and AI vendors aren’t targeted by angry mobs! And this is a very real concern in some parts of the world (recall protests against Uber, for instance).

How to regulate

The simplest way to regulate AI would be to prohibit everything, but it looks like this approach isn’t on the table yet. And anyway, it’s not much easier to prohibit AI than it is computers. Therefore, all reasonable regulation attempts should follow the principle of “the greater the risk, the stricter the requirements”.

The machine-learning models that are used for something rather trivial — like retail buyer recommendations — can go unregulated, but the more sophisticated the model — or the more sensitive the application area — the more drastic can be the requirements for system vendors and users. For example:

  • Submitting a model’s code or training dataset for inspection to regulators or experts.
  • Proving the robustness of a training dataset, including in terms of bias, copyright and so forth.
  • Proving the reasonableness of the AI “output”; for example, that it’s free of hallucinations.
  • Labelling AI operations and results.
  • Updating a model and training dataset; for example, screening out folks of a given skin color from the source data, or suppressing chemical formulas for explosives in the model’s output.
  • Testing AI for “hostile data”, and updating its behavior as necessary.
  • Controlling who’s using specific AI and why. Denying specific types of use.
  • Training large AI, or that which applies to a particular area, only with the permission of the regulator.
  • Proving that it’s safe to use AI to address a particular problem. This approach is very exotic for IT, but more than familiar to, for example, pharmaceutical companies, aircraft manufacturers and many other industries where safety is paramount. First would come five years of thorough tests, then the regulator’s permission, and only then a product could be released for general use.

The last measure appears excessively strict, but only until you learn about incidents in which AI messed up treatment priorities for acute asthma and pneumonia patients and tried to send them home instead of to an intensive care unit.

The enforcement measures may range from fines for violations of AI rules (along the lines of European penalties for GDPR violations) to licensing of AI-related activities and criminal sanctions for breaches of legislation (as proposed in China).

But what’s the right way?

What follows are my own personal opinions — but they’re based on 30 years of actively pursuing advanced technological development in the cybersecurity industry: from machine learning to “secure-by-design” systems.

First, we do need regulation. Without it, AI will end up resembling highways without traffic rules. Or, more relevantly, resembling the online personal data collection situation in the late 2000s, when nearly everyone would collect all they could lay their hands on. Above all, regulation promotes self-discipline in the market players.

Second, we need to maximize international harmonization and cooperation in regulation — the same way as with technical standards in mobile communications, the internet and so on. Sounds utopian given the modern geopolitical reality, but that doesn’t make it any less desirable.

Third, regulation needn’t be too strict: it would be short-sighted to strangle a dynamic young industry like this one with overregulation. That said, we need a mechanism for frequent revisions of the rules to stay abreast of technology and market developments.

Fourth, the rules, risk levels, and levels of protection measures should be defined in consultation with a great many relevantly-experienced experts.

Fifth, we don’t have to wait ten years. I’ve been banging on about the serious risks inherent in the Internet of Things and about vulnerabilities in industrial equipment for over a decade already, while documents like the EU Cyber Resilience Act first appeared (as drafts!) only last year.

But that’s all for now, folks! And well done to those of you who’ve read this to the end — thank you all! And here’s to an interesting – safe – AI-enhanced future!…

]]>
How the A-GPS in your smartphone works, and whether Qualcomm is tracking you | Kaspersky official blog https://www.kaspersky.com/blog/gps-agps-supl-tracking-protection/48175/ Tue, 16 May 2023 07:47:07 +0000 https://www.kaspersky.com/blog/?p=48175 News that Qualcomm, a leading vendor of smartphone chips, tracked users with its geolocation service caused a minor stir in the tech press recently. In this post we’ll separate the truth from the nonsense in that story, and discuss how you can actually minimize undesired geolocation tracking. First things first, let’s look at how geopositioning actually works.

How mobile devices determine your location

The traditional geolocation method is to receive a satellite signal from GPS, GLONASS, Galileo, or Beidou systems. Using this data, the receiver (the chip in the smartphone or navigation device) performs calculations and pins down its location. This is a fairly accurate method that doesn’t involve the transmission of any information by the device — only reception. But there are significant drawbacks to this geolocation method: it doesn’t work indoors, and it takes a long time if the receiver isn’t used daily. This is because the device needs to know the exact location of the satellites to be able to perform the calculation, so it has to download the so-called almanac, which contains information about satellite positions and movement, and this takes between five and ten minutes to retrieve if downloading directly from satellite.
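The calculation the receiver performs can be illustrated with a simplified 2-D example in Python (real receivers solve in three dimensions plus the receiver clock error, which is why at least four satellites are needed). The positions and distances below are made up for the illustration.

```python
import numpy as np

# Known satellite positions (flattened to a 2-D toy problem).
sats = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 60.0])

# "Measured" ranges from the receiver to each satellite.
d = np.linalg.norm(sats - true_pos, axis=1)

# Each range gives a circle equation |p - s_i|^2 = d_i^2. Subtracting
# the first equation from the rest leaves a linear system A @ p = b.
A = 2 * (sats[1:] - sats[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(sats[1:] ** 2, axis=1) - np.sum(sats[0] ** 2))

pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pos)  # recovers the receiver position
```

This is also why the almanac matters: the satellite coordinates in `sats` must be known before the receiver can solve for its own position.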

As a much quicker alternative to downloading directly from satellite, devices can download the almanac from the internet within seconds via a technology called A-GPS (Assisted GPS). As per the original specification, only actual satellite data available at the moment is transmitted, but several developers have added a weekly forecast of satellite positions to speed up the calculation of coordinates even if the receiver has no internet connection for days to come. The technology is known as the Predicted Satellite Data Service (PSDS), and the aforementioned Qualcomm service is the most impressive implementation to date. Launched in 2007, the service was named “gpsOne XTRA”, renamed to “IZat XTRA Assistance” in 2013, and in its most recent incarnation rebranded again as the “Qualcomm GNSS Assistance Service”.

How satellite signal reception works indoors and what SUPL is

As mentioned above, another problem with geopositioning using a satellite signal is that it may not be available indoors, so there are other ways of determining the location of a smartphone. The classic method from the nineties is to check which cellular base stations can be received at the current spot and to calculate the approximate location of the device by comparing their signal strength knowing the exact position of the stations.

With minor modifications, this is supported by modern LTE networks as well. Smartphones are also able to check for nearby Wi-Fi hotspots and determine their approximate location. This is typically enabled by centralized databases storing information about Wi-Fi access points, provided by specific services such as Google Location Service.

All existing geopositioning methods are defined by SUPL (Secure User Plane Location), a standard supported by mobile operators and by smartphone, microchip and operating-system developers. Any application that needs to know the user’s location gets it from the mobile operating system using the fastest and most accurate combination of methods currently available.
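A crude version of the base-station method described above is a weighted centroid: take the known station coordinates and weight them by received signal strength. The sketch below is our own simplification, not the actual SUPL algorithm (real implementations model signal propagation far more carefully).

```python
import numpy as np

# Known coordinates of nearby base stations / Wi-Fi hotspots (toy map, meters).
stations = np.array([[0.0, 0.0], [200.0, 0.0], [100.0, 150.0]])

# Received signal strength in dBm: higher (less negative) means closer.
rssi = np.array([-60.0, -80.0, -70.0])

# Convert dBm to linear power and use it as a proximity weight.
weights = 10 ** (rssi / 10)
estimate = (stations * weights[:, None]).sum(axis=0) / weights.sum()
print(estimate)  # pulled strongly toward the loudest station at (0, 0)
```

Because every 10 dBm difference is a tenfold difference in linear power, the estimate lands much closer to the strongest station than a plain average of the three coordinates would.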

No privacy guaranteed

Accessing SUPL services doesn’t have to result in a breach of user privacy, but in practice, data does often get leaked. When your phone determines your location using nearby cellular base stations, the mobile operator knows exactly which subscriber sent the request and where they were at that moment. Google monetizes its Location Services by recording the user’s location and identifier; however, technically this is unnecessary.

As for A-GPS, servers can, in theory, provide the required data without collecting subscribers’ identifiers or storing any of their data. However, many developers do both. Android’s standard implementation of SUPL sends the smartphone’s IMSI (unique SIM number) as part of a SUPL request. The Qualcomm XTRA client on the smartphone transmits subscribers’ “technical identifiers”, including IP addresses. According to Qualcomm, it “de-identifies” the data; that is, it deletes records linking subscriber identifiers and IP addresses after 90 days, and then uses the data exclusively for certain “business purposes”.

One important point: data from an A-GPS request cannot be used for pinning down the user’s location. The almanac available from the server is the same anywhere on Earth — it’s the user’s device that calculates the location. In other words, all that the owners of these services could store is information about a user sending a request to the server at a certain time, but not the user’s location.

The accusations against Qualcomm

Publications criticizing Qualcomm cite research published on the Nitrokey website by an author going by the name Paul Privacy. The paper maintains that smartphones with Qualcomm chips send users’ personal data to the company’s servers over the unencrypted HTTP protocol without their knowledge. This allegedly happens beyond anyone’s control, since the feature is supposedly implemented at the hardware level.

Despite the aforementioned data privacy issues that services like the Qualcomm GNSS Assistance Service do suffer from, the research needlessly spooks and misleads users, and it contains a number of inaccuracies:

  • In old smartphones, information could indeed have been transmitted over insecure HTTP, but Qualcomm fixed that XTRA vulnerability back in 2016.
  • According to the license agreement, information such as a list of installed applications can be transmitted via the XTRA services, but practical tests (packet inspection and studying the Android source code) showed no proof of this actually happening.
  • Contrary to the researchers’ initial allegations, the data-sharing function is not embedded in the microchip (baseband) but implemented at the OS level, so it certainly can be controlled: by the OS developers and by the modding community alike. Replacing or deactivating specific SUPL services on a smartphone has been a known technique since 2012, though it was done to make GPS lock on faster rather than for privacy reasons.

Spying protection: for everyone and for the extra cautious

So, Qualcomm (probably) does not track us. That said, tracking via geolocation does happen, just on a whole different level: weather apps and other seemingly harmless programs you use on a day-to-day basis do it systematically. What we suggest everyone should do is one simple yet important thing: minimize the number of apps that have access to your location. After all, you can pick a place manually to get a weather forecast, and entering a delivery address when shopping online is not that big a deal.

Those of you who want to prevent their location from being logged anywhere should take several extra protective steps:

  • Disable every geolocation service apart from the good old GPS on your smartphone.
  • Use advanced tools to block your phone from accessing SUPL services. Depending on the smartphone model and operating system, this can be done with a filtering DNS server, a system firewall, a filtering router, or dedicated smartphone settings.
  • It’s best to avoid using cellphones… altogether! Even if you do all of the above, the mobile operator still knows your approximate location at any time.
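For the second tip above, one low-tech way to block SUPL access is to null-route the relevant hostnames in a hosts file or DNS filter. A sketch, assuming a couple of endpoints commonly cited in modding guides; check which servers your own device actually contacts (for example, by inspecting its traffic) before blocking anything:

```python
# Sketch: generate hosts-file entries that null-route commonly cited
# SUPL/A-GPS endpoints. The hostnames below are examples often mentioned
# in modding guides and may not match your device; verify first.

SUPL_HOSTS = [
    "supl.google.com",         # Google SUPL server
    "xtrapath1.izatcloud.net", # Qualcomm XTRA assistance data
]

def hosts_entries(hostnames, sink="0.0.0.0"):
    """Return /etc/hosts-style lines that redirect hostnames to a sink address."""
    return "\n".join(f"{sink} {name}" for name in hostnames)

print(hosts_entries(SUPL_HOSTS))
```

The same list can be fed to a filtering router or a DNS blocklist instead of a hosts file; the effect is identical, just enforced network-wide.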
Neural networks reveal the images used to train them | Kaspersky official blog https://www.kaspersky.com/blog/neural-networks-data-leaks/47992/ Mon, 24 Apr 2023 14:43:02 +0000

Your (neural) networks are leaking

Researchers at universities in the U.S. and Switzerland, in collaboration with Google and DeepMind, have published a paper showing how data can leak from image-generation systems built on the machine-learning models DALL-E, Imagen, and Stable Diffusion. All of them work the same way on the user side: you type in a specific text query (for example, “an armchair in the shape of an avocado”) and get a generated image in return.

Image generated by the Dall-E neural network. Source.

All these systems are trained on a vast number (tens or hundreds of thousands) of images with pre-prepared descriptions. The idea behind such neural networks is that, by consuming a huge amount of training data, they can create new, unique images. However, the main takeaway of the new study is that these images are not always so unique. In some cases it’s possible to force the neural network to reproduce almost exactly an original image previously used for training. And that means that neural networks can unwittingly reveal private information.

Image generated by the Stable Diffusion neural network (right) and the original image from the training set (left). Source.

More data for the “data god”

The output of a machine-learning system in response to a query can seem like magic to a non-specialist: “woah, it’s like an all-knowing robot!” But there’s no magic really…

All neural networks work in more or less the same way: an algorithm is created and trained on a data set (for example, a series of pictures of cats and dogs) with a description of what exactly is depicted in each image. After the training stage, the algorithm is shown a new image and asked to work out whether it’s a cat or a dog. From these humble beginnings, the developers of such systems moved on to a more complex scenario: an algorithm trained on lots of pictures of cats creates, on demand, an image of a pet that never existed. Such experiments are carried out not only with images, but also with text, video, and even voice: we’ve already written about the problem of deepfakes, whereby digitally altered videos of (mostly) politicians or celebrities appear to say things they never actually said.

For all neural networks, the starting point is a set of training data: neural networks cannot invent new entities from nothing. To create an image of a cat, the algorithm must study thousands of real photographs or drawings of these animals. There are plenty of arguments for keeping these data sets confidential. Some of them are in the public domain; other data sets are the intellectual property of the developer company that invested considerable time and effort into creating them in the hope of achieving a competitive advantage. Still others, by definition, constitute sensitive information. For example, experiments are underway to use neural networks to diagnose diseases based on X-rays and other medical scans. This means that the algorithmic training data contains the actual health data of real people, which, for obvious reasons, must not fall into the wrong hands.

Diffuse it

Although machine-learning algorithms look the same to the outsider, they are in fact different. In their paper, the researchers pay special attention to machine-learning diffusion models. These work like this: the training data (again, images of people, cars, houses, and so on) is distorted by adding noise, and the neural network is then trained to restore such images to their original state. This method makes it possible to generate images of decent quality, but a potential drawback (in comparison with algorithms in generative adversarial networks, for example) is their greater tendency to leak data.
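The forward noising process can be illustrated with a toy numerical sketch. This is not a real generative model (there is no learned denoiser); it only shows the mixing of signal and noise that diffusion models learn to invert:

```python
# Toy sketch of the diffusion idea on a tiny 1-D "image": the forward
# process mixes the clean signal with Gaussian noise. Knowing the exact
# noise, the clean signal is perfectly recoverable; a real diffusion
# model instead learns to *predict* that noise and reverses the process
# step by step.
import math
import random

random.seed(0)
x0 = [i / 7.0 for i in range(8)]                # the "training image"
betas = [0.001 + 0.004 * t for t in range(50)]  # linear noise schedule

alpha_bar = 1.0
for b in betas:                                 # cumulative product of (1 - beta)
    alpha_bar *= 1.0 - b

noise = [random.gauss(0.0, 1.0) for _ in x0]

# Forward process: x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * noise
xt = [math.sqrt(alpha_bar) * x + math.sqrt(1 - alpha_bar) * n
      for x, n in zip(x0, noise)]

# Inversion using the true noise (a trained model would estimate it)
x0_rec = [(v - math.sqrt(1 - alpha_bar) * n) / math.sqrt(alpha_bar)
          for v, n in zip(xt, noise)]

print(all(abs(a - b) < 1e-9 for a, b in zip(x0, x0_rec)))  # → True
```

The memorization problem arises precisely because the trained denoiser can get too good at predicting the noise for specific training images, effectively storing them.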

The original data can be extracted in at least three different ways. First, using specific queries, you can force the neural network to output not something unique generated from thousands of pictures, but a specific source image. Second, the original image can be reconstructed even if only part of it is available. Third, it’s possible to simply establish whether or not a particular image is contained within the training data.

Very often, neural networks are… lazy, and instead of a new image they produce something from the training set if it contains multiple duplicates of the same picture. Besides the above example with the Anne Graham Lotz photo, the study gives quite a few other similar results:

Odd rows: the original images. Even rows: images generated by Stable Diffusion v1.4. Source.

If an image is duplicated in the training set more than a hundred times, there’s a very high chance of its leaking in its near-original form. However, the researchers demonstrated ways to retrieve training images that only appeared once in the original set. This method is far less efficient: out of five hundred tested images, the algorithm randomly recreated only three of them. The most artistic method of attacking a neural network involves recreating a source image using just a fragment of it as input.

The researchers asked the neural network to complete the picture, after having deleted part of it. Doing this can be used to determine fairly accurately whether a particular image was in the training set. If it was, the machine-learning algorithm generated an almost exact copy of the original photo or drawing. Source.

At this point, let’s divert our attention to the issue of neural networks and copyright.

Who stole from whom?

In January 2023, three artists sued the creators of image-generating services that used machine-learning algorithms. They claimed (justifiably) that the developers of the neural networks had trained them on images collected online without any respect for copyright. A neural network can indeed copy the style of a particular artist, and thus deprive them of income. The paper hints that in some cases algorithms can, for various reasons, engage in outright plagiarism, generating drawings, photographs and other images that are almost identical to the work of real people.

The study makes recommendations for strengthening the privacy of the original training set:

  • Get rid of duplicates.
  • Reprocess training images, for example by adding noise or changing the brightness; this makes data leakage less likely.
  • Test the algorithm with special training images, then check that it doesn’t inadvertently reproduce them accurately.
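The first recommendation can be sketched as a byte-level dedup pass. Real pipelines use perceptual hashes or embeddings to also catch near-duplicates; this minimal version only removes byte-identical copies:

```python
# Sketch: drop exact duplicates from a training set by hashing each
# image's bytes. Only catches byte-identical copies; near-duplicate
# detection needs perceptual hashing or embedding similarity.
import hashlib

def dedup(images):
    """images: list of bytes objects; returns the list with exact duplicates removed."""
    seen, unique = set(), []
    for img in images:
        digest = hashlib.sha256(img).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(img)
    return unique

# Toy "dataset" where one image appears three times
dataset = [b"cat-photo", b"dog-photo", b"cat-photo", b"cat-photo"]
print(len(dedup(dataset)))  # → 2
```

Since the study found that images duplicated over a hundred times leak with very high probability, even this crude pass removes the worst offenders.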

What next?

The ethics and legality of generative art certainly make for an interesting debate, one in which a balance must be sought between artists and the developers of the technology. On the one hand, copyright must be respected. On the other, is computer art really so different from human art? In both cases, the creators draw inspiration from the works of colleagues and competitors.

But let’s get back down to earth and talk about security. The paper provides a specific set of facts about only one machine-learning model. Extending the concept to all similar algorithms, we arrive at an interesting situation. It’s not hard to imagine a scenario in which a mobile operator’s smart assistant hands out sensitive corporate information in response to a user query: after all, it was in the training data. Or a cunning query tricks a public neural network into generating a copy of someone’s passport. The researchers stress that such problems remain theoretical for the time being.

But other problems are already with us. As we speak, the text-generating neural network ChatGPT is being used to write real malicious code that (sometimes) works. And GitHub Copilot is helping programmers write code using a huge amount of open-source software as input. And the tool doesn’t always respect the copyright and privacy of the authors whose code ended up in the sprawling set of training data. As neural networks evolve, so too will the attacks on them — with consequences that no one yet fully understands.

Risks associated with smart locks | Kaspersky official blog https://www.kaspersky.com/blog/3-reasons-not-to-use-smart-locks/47866/ Fri, 14 Apr 2023 11:49:47 +0000

Smart locks can be really handy. There are plenty of them on the market and lots of different types to choose from. Some can detect when the owner (or rather, their smartphone) is approaching, and open without a key. Others are controlled remotely, allowing you to open the door to friends or relatives without being home. Still others also provide video surveillance: someone rings the doorbell, and you immediately see on your smartphone who it is.

However, smart devices carry risks that users of traditional, offline locks never have to worry about. A careful study of these risks reveals no fewer than three reasons to stick with the old way. Let’s take a look at them…

First reason: smart locks are physically more vulnerable than normal locks

The problem here is that smart locks combine two different concepts. In theory, these locks should have a reliable smart component, while at the same time provide robust protection against physical tampering so they can’t be opened with, say, a screwdriver or penknife. Combining these two concepts doesn’t always work: the result is usually either a flimsy smart lock, or a heavy-duty iron lock with vulnerable software.

We’ve already talked about some particularly egregious examples of locks incapable of doing their jobs in another post. They include a cool padlock with a fingerprint scanner, under which there happens to be an opening mechanism (a lever) potentially accessible to anyone, plus a smart lock for bicycles that can be taken apart with a screwdriver.

The top panel with the fingerprint scanner is easy to remove with a knife. The opening mechanism is accessible under the panel. Source.

Second reason: issues with the “smart” component

Making the “smart” component secure enough is also not easy. It’s important to remember that developers of such devices often prioritize functionality over protection. The most recent example is the Akuvox E11, a device designed not for home use but for offices. The Akuvox E11 is a smart intercom with a terminal for receiving a video stream from the built-in camera, plus a button to open the door. And, since it’s a smart device, you can control it via a smartphone app.

The Akuvox E11 smart intercom has multiple vulnerabilities that allow unauthorized access to the premises without any trouble. Source.

The software has been implemented in such a way that anyone can gain access to both video and sound from the camera at any time. And if you’ve not thought about isolating the web interface from the internet, anyone will be able to control the lock and open the door. This is a textbook example of insecure software development: video requests miss authorization checks; part of the web interface is accessible without a password; and the password itself is easy to crack due to encryption with a fixed key that’s the same for all devices.
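To see why a fixed key shared across all devices is so weak, consider this toy sketch (an illustrative XOR “cipher”, not the Akuvox E11’s actual scheme): every unit produces the same ciphertext for the same password, so one precomputed table cracks them all:

```python
# Toy illustration of why one fixed encryption key baked into every
# device is dangerous: identical passwords produce identical ciphertexts
# across all units, enabling offline precomputed-table attacks. The XOR
# "cipher" and key here are purely for demonstration.
FIXED_KEY = b"\x13\x37\x42\x99"  # hypothetical key shared by all devices

def toy_encrypt(password: bytes, key: bytes = FIXED_KEY) -> bytes:
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(password))

# Two different "devices" protect the same password...
ct_a = toy_encrypt(b"admin123")
ct_b = toy_encrypt(b"admin123")
print(ct_a == ct_b)  # → True: crack one, crack them all

# ...so an attacker can precompute ciphertexts for common passwords once
table = {toy_encrypt(p): p for p in (b"admin123", b"password", b"12345678")}
print(table[ct_a])  # → b'admin123'
```

A per-device random key (or a proper password hash with a unique salt) defeats this: the same password would then encrypt differently on every unit.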

Want more examples? Here you go… This article talks about a lock that allows nearby intruders to get your Wi-Fi network password. Here, a smart lock protects data transfer poorly: an attacker can eavesdrop on the radio channel and seize control. And here is another example of a poorly secured web interface.

Third reason: the software needs to be updated regularly

A typical smartphone receives updates for two or three years after its release. For low-budget IoT devices, support may end even sooner. Updating a smart device via the internet is fairly straightforward, but maintaining that support requires resources and money on the vendor’s part.

This in itself can be a problem, such as when the vendor disables the cloud infrastructure and the device stops working. But even if smart-lock functionality is preserved, vulnerabilities that were unknown to the vendor at the time of release could yet appear.

For example, in 2022, researchers discovered a vulnerability in the Bluetooth Low Energy protocol, which many companies have adopted as the standard for contactless authentication when unlocking various devices (including smart locks). This vulnerability opens the door (so to speak) to so-called relay attacks, which require the attacker to be close to the smart-lock owner and use special (but relatively inexpensive) equipment. Armed with this hardware, the attacker can relay signals between the victim’s smartphone and the smart lock. This tricks the smart lock into thinking that the owner’s smartphone is nearby (and not in a shopping mall three miles away), whereupon it unlocks the door.

A Kwikset lock vulnerable to a relay attack using a bug in the Bluetooth Low Energy protocol. Source.

Since smart-lock software is highly complex, the probability of its containing serious vulnerabilities is never zero. If one is discovered, the vendor should promptly release an update and send it to all sold devices. But what if the model was discontinued or is no longer supported?

With smartphones, we solve this problem by buying a new device every two to three years. How often do you plan to replace a door lock connected to the internet? We generally expect such devices to last for decades, not a couple of years (until the vendor pulls support or goes bust).

So, what to do?

It should be understood that all locks (not only smart ones) can be cracked. However, when deciding to install a smart device instead of a standard lock, think carefully: do you really need to be able to open the door from your smartphone? If you answer yes to this question, at least consider the following points:

  • Look for information about the particular device before purchasing.
  • Read not only reviews about convenience and features of the smart lock, but also reports of potential problems and risks.
  • Go for a newer device: chances are the vendor will maintain support for it a little longer.
  • Once you’ve bought a device, study its networking features and think carefully about whether you need them; it would make sense to disable any that could be dangerous.
  • Don’t forget to protect your computers, especially if they’re on the same network as the smart lock. It would be a double shame if a malware infection on your computer were to also cause your home’s doors to be flung open.
Smart device vulnerabilities and securing against them | Kaspersky official blog https://www.kaspersky.com/blog/how-to-secure-smart-home/47472/ Mon, 13 Mar 2023 06:09:11 +0000

Intelligent features and internet connectivity are built into most television sets, baby monitors, and many other digital devices these days. Whether or not you use these smart features, such devices create security risks that you should know about and protect yourself against; and if you do use many of your smart home’s features, securing its components is all the more critical. We’ve already published a separate article on planning a smart home, so here we’ll focus on security.

The biggest smart home risks

Networked home appliances produce several essentially different types of risk:

  • The devices share lots of data with the vendor on a regular basis. For example, your smart television is capable of identifying the content you’re watching — even if it’s on a flash drive or external player. Certain vendors make big bucks by spying on their customers. Even less sophisticated appliances, such as smart washing machines, collect and share data with their vendors.
  • If your smart device is protected by a weak password, still runs on factory settings that no one has changed, or contains operating-system vulnerabilities, hackers can hijack it. The consequences vary by device type. A smart washing machine can be shut down in the middle of a wash cycle as a kind of prank, whereas baby monitors can be abused for spying on the inhabitants of the house and even scaring them. A fully featured smart home is susceptible to downright nasty scenarios, such as a blackout or heating shutdown.
  • A hijacked smart device can be infected with malicious code and used for launching cyberattacks both on computers on the home network and devices on the broader Web. Powerful DDoS attacks are known to have been launched entirely from infected surveillance cameras. The owner of the infected gadget risks seeing their internet connection choked and getting onto various blacklists.
  • If the level of security implemented by the vendor is insufficient, the data sent by the device can be found and published. Surveillance and peephole camera footage is sometimes stored in poorly protected cloud environments — free for anyone to watch.

Luckily for you, none of these horrors has to befall you — the risks can be significantly lessened.

What if you don’t need your home to be smart

An unutilized smart home is a fairly common situation. According to appliance vendor statistics, half of all IoT devices never see a network connection. The owners use them in the old-fashioned non-smart mode, without management via a mobile app or any of the other twenty-first-century luxuries. However, even a non-configured device like that produces security risks. It’s quite likely that it exposes a freely accessible, unsecured Wi-Fi access point or tries to connect to nearby phones via Bluetooth every now and then. In that case, someone, such as your neighbors, could assume control. Therefore, the minimum you need to do to “dumb down” your smart home appliances is review the user manual, open the settings, and turn off both Wi-Fi and Bluetooth connectivity.

There are devices that won’t let you do this or will turn Wi-Fi back on after a power interruption. This can be fixed with a trick that’s a bit challenging but gets the job done: changing your home Wi-Fi password temporarily, connecting the misbehaving device, and then changing the password again. The device will keep trying to connect using the invalid password, but it will be impossible to hack it by abusing the default settings.

General advice

Regardless of whether your smart home is centrally managed or composed of mismatched devices not connected to one another, they still need basic security.

  • Make sure your Wi-Fi router is secured. Remember that your router is a part of the smart home system too. We’ve published several detailed guides to securing a home Wi-Fi system and configuring a router properly. The only thing we’d like to add is that home-router firmware is often found to contain vulnerabilities that are exploited for attacking home networks, so the set-and-forget approach doesn’t work here. Firmware updates need to be checked on a regular basis. Quality routers let you update their firmware right from the web interface management panel. If that’s not the case for you, visit the vendor’s website or contact your internet service provider to obtain a newer version of the firmware and follow the appropriate guide to install it. To wrap up this router adventure, check that the ability to manage the router from outside the home network is disabled in the settings. ISP employees may need it for troubleshooting sometimes, but it’s often turned on when it’s not needed, thus increasing cybersecurity risks.
  • Check your network regularly to make sure there are no unauthorized devices connected to it. The handiest way to do this is by using a dedicated app. Kaspersky Premium can display a list of all devices connected to the network, and often also their vendors and protection status where available. It’s important that you keep track of your devices and remove extraneous ones, such as a refrigerator, which has no real need for a Wi-Fi connection — or a neighbor who hooked up to free Wi-Fi.
  • Consider vendor reputation when purchasing a gadget. Every vendor suffers from vulnerabilities and defects, but while some are quick to fix their bugs and release updates, others will keep denying there’s a problem for as long as they can. According to a Kaspersky survey, 34% of users believe that choosing a trusted vendor is all that it takes to have a secure smart home. While that certainly lowers the risks, staying secure still requires other steps as well.

What if your smart home is built on Wi-Fi?

Do you have a bunch of smart devices that aren’t connected to one another, or are joined up with the help of Amazon Alexa or Apple Homekit? In that case, each device independently connects to the internet through Wi-Fi. This is the most complex scenario from a security standpoint, as the passwords, firmware, and vulnerabilities need to be tracked for each device individually. Unfortunately, setup details vary greatly between device types and vendors, so we have to limit ourselves to general recommendations.

  • Set up a guest Wi-Fi network. Professionals call this “network segmentation”. Ideally, your home network should be split into three segments: home computers, guest devices, and smart home appliances. Many routers are not capable of such miracles, but you should at least have two segments: one for home devices and one for guests. This will keep visitors from reconfiguring your cameras and starting up the robot vacuum just for fun. It goes without saying that the segments must be secured with different Wi-Fi passwords, and the guest segment should have stricter security settings — such as client isolation, bandwidth limits, and so on. Confining IoT devices to a separate segment reduces associated risks. A hacker wouldn’t be able to attack a home computer from a hijacked IP camera. The reverse is true as well: an infected home computer wouldn’t be able to access a video camera. Open the router’s web-based management interface and review the Wi-Fi settings to follow this tip. If some of your appliances are connected via a cable, make sure that they’re located in the correct network segments by checking the other sections of the router settings.
  • Set strong passwords. Open the settings for each device. This can sometimes be done through an official mobile app, and sometimes through a web interface. Set a long, unique password for each device by following the user manual. You can’t use the same password for all devices! To keep your ducks in a row, use a password manager. By the way, one is included with Kaspersky Premium, and it’s also available as a standalone app.
  • Update the firmware. Do this for each of your devices that support firmware updates via an app or web interface, and repeat regularly.
  • Check the online service settings. The same device may be able to operate in different modes, sending different amounts of information via the internet. For example, a robot vacuum cleaner may or may not be allowed to upload a detailed cleaning pattern (in other words, a map of your home) to the server. A video peephole may be allowed to save to the server every photo or video of an approaching visitor it detects with its motion sensor, or only to display these when you press the button. Avoid overloading the vendor’s cloud storage with unneeded information: disable unused features, and don’t send the server anything that can be excluded from sharing without compromising the device’s utility.
  • Follow updates on the vendors of devices you use. Sometimes, IoT devices are found to contain critical vulnerabilities or other issues, and their owners need to take action: update the firmware, enable or disable a certain feature, reset the password, delete an old cloud backup… Conscientious vendors typically maintain a section on their website where they publish security recommendations and newsletters, but these are often written in complex language and contain information on many devices that aren’t relevant to you. Therefore, it’s better to check for news about your devices from time to time and visit the official website if you find something alarming.
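The “strong passwords” tip above can be sketched with Python’s standard secrets module; the device names are, of course, just placeholders:

```python
# Sketch: generate a long, unique random password per smart device using
# the standard-library secrets module, suitable for storing in a
# password manager. Device names below are placeholders.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def device_password(length=20):
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

passwords = {name: device_password() for name in ("camera", "vacuum", "doorbell")}
for name, pwd in passwords.items():
    print(name, pwd)
```

Unlike the random module, secrets draws from the OS’s cryptographic source, so the generated passwords are suitable for real credentials rather than just simulations.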

What if your smart home is centrally managed?

If your smart home is a centralized system, with most of the devices controlled by a hub, this makes the owner’s task somewhat easier. All of the above steps, such as setting a strong password, regularly updating the firmware and so on, mostly need to be performed on one device: the smart home controller. Enable two-factor authentication on the controller if possible.

We also recommend limiting internet access on the controller, for example by restricting data sharing with any computer except for vendor servers and devices on the home network. This can be done in the home-router settings. Some controllers are capable of functioning without any internet connection at all. If managing your smart home remotely isn’t critical for you, disconnecting the hub from the internet is a powerful security measure. This is no cure-all, as complex, multi-stage attacks will remain a threat, but at least the most common-or-garden attacks will be prevented.
