Stan Kaminsky – Kaspersky official blog https://www.kaspersky.com/blog The Official Blog from Kaspersky covers information to help protect you against viruses, spyware, hackers, spam & other forms of malware. Fri, 01 Mar 2024 11:45:46 +0000 en-US hourly 1 https://wordpress.org/?v=6.4.3 https://media.kasperskydaily.com/wp-content/uploads/sites/92/2019/06/04074830/cropped-k-favicon-new-150x150.png Stan Kaminsky – Kaspersky official blog https://www.kaspersky.com/blog 32 32 How to store Location History in Android in 2024? | Kaspersky official blog https://www.kaspersky.com/blog/google-location-history-security-2024/50725/ Fri, 01 Mar 2024 11:45:46 +0000 https://www.kaspersky.com/blog/?p=50725 Of all the accusations routinely hurled Google’s way, there’s one that especially alarms users: the company can track the location of all Android — and to some extent, Apple — phones. Past experience suggests that Google indeed does this — not only using this data to display ads, but also storing it in Location History and even providing it to law enforcement agencies. Now Google promises to only store Location History on the device. Should we believe it?

What’s wrong with Location History?

Location History lets you easily view the places a user visited and when they did so. You can use it for all kinds of things: remembering the name of that beach or restaurant you went to while on vacation two years ago, finding the address of a place your better half often goes to after work, getting new bar suggestions based on the ones you’ve been to, locating the florist that delivered the surprise bouquet for a party, and many more. The different ways this feature both benefits and harms Google account holders are commonly reported. Little wonder then that many — even those with a clean consciences — often want to turn it off completely.

Regrettably, Google has often been caught abusing its Location History setting. Even if explicitly disabled, Location History was still collected under “Web & App Activity”. This led to a series of lawsuits, which Google lost. In 2023, the company was ordered to pay $93 million under one suit, and a year earlier $392 million under another. These fines were but a pinprick to a corporation with hundreds of billions of dollars in revenue, but at least the court had Google revise its location tracking practices.

The combined legal and public pressure apparently led to the company announcing at the end of 2023 a drastic change: now, according to Google, Location History will be collected and stored on users’ devices only. But does that make the feature any more secure?

How does Location History (supposedly) work in 2024?

First of all, check that the feature has been updated on your device. As is wont with Google, updates for the billions of Android devices roll out in waves, and to relatively recent OS versions only. So, unless you see an alert that looks like the one below, it’s likely your device hasn’t received the update, and enabling Location History will save the data on Google’s servers.

Unless Google has explicitly warned you that your Location History will be stored on your device, it's likely to continue being saved to Google's servers

Unless Google has explicitly warned you that your Location History will be stored on your device, it’s likely to continue being saved to Google’s servers

If your Location History is now stored locally, however, Google Maps will offer options for centralized management of your “places”. By selecting a point on the map, such as a coffee shop, and opening its description, you’ll see all the times you visited the place in the past, all searches for the place on the map, and other things like that. One tap on the location card can delete all of your activity associated with the place.

Google says it will store the history for each place for three months by default and then delete it. To change this setting or disable history, simply tap the blue dot on the map that shows your current location and turn off Location History in the window that pops up.

Options for configuring and disabling Location History

Options for configuring and disabling Location History

An obvious downside to offline Location History is that it won’t be accessible to the user on their other devices. As a workaround, Google suggests storing an encrypted backup on its servers.

Keep in mind that what we’re discussing here is the new implementation of Location History as described by Google. Detailed analysis of how this new pattern actually works may reveal pitfalls and caveats that no one except Google’s developers knows about at this point.

What threats does this update eliminate?

Although the new storage method improves the privacy of location data, it can’t be considered a one-size-fits-all solution to all existing issues. So how does it affect various hypothetical threat scenarios?

  • Tracking you to customize ads. This is unlikely to be affected in any way: Google can continue to collect data on places you visit in an anonymized, generalized form. You’ll keep seeing ads linked to your current or past locations unless you disable either that or all targeted ads entirely. Remember that Google isn’t the only one out there tracking your location. Other apps and services have been found guilty of abusing this data as well; here are a few examples: one, two, and three.
  • Evil hackers and cyberspies. These malicious groups typically use commercial spyware (stalkerware) or malicious implants, so the changes to Google’s Location History will hardly affect them.
  • Jealous partner or prying relative. It’ll be harder to use a computer on which you’re signed in to your Google account to track your location. Someone could still quietly snoop on your phone while it’s unlocked, as well as secretly install commercial spyware such as stalkerware, which we mentioned above. Therefore, it’s general steps to protect smartphones from mobile spyware, not the updates to Google Maps, that are crucial to addressing this.
  • Law enforcement. This isn’t likely to change much, as, in addition to asking Google, the police can request your location data from the mobile carrier or deduce it from surveillance camera footage, which is both easier and faster.

So, the update doesn’t help user privacy all that much, does it? We’re afraid not.

How do I effectively protect my location data?

You’re limited to fairly drastic options these days if you want to prevent location tracking. We list these here in ascending order of extremity.

  • Use comprehensive security on all your devices, including phones and tablets. This will reduce the likelihood of being exposed to malware, including stalkerware.
  • Disable Google Location History and Web & App Activity, avoid giving location permissions to any apps except navigation apps, turn off personalized ads, and use a DNS service that filters ads.
  • Turn off all geo-tracking features (GPS, Google location services, and others) on your smartphone.
  • When on an especially important trip, activate flight mode for an hour or two, or just turn off your smartphone.
  • Ditch smartphones in favor of the most basic dumbphones.
  • Ultimately, stop carrying around any kind of phone at all.
  • Live 100% off-grid; e.g., in a cave.
]]>
full large medium thumbnail
How to run language models and other AI tools locally on your computer | Kaspersky official blog https://www.kaspersky.com/blog/how-to-use-ai-locally-and-securely/50576/ Fri, 16 Feb 2024 11:08:41 +0000 https://www.kaspersky.com/blog/?p=50576 Many people are already experimenting with generative neural networks and finding regular use for them, including at work. For example, ChatGPT and its analogs are regularly used by almost 60% of Americans (and not always with permission from management). However, all the data involved in such operations — both user prompts and model responses — are stored on servers of OpenAI, Google, and the rest. For tasks where such information leakage is unacceptable, you don’t need to abandon AI completely — you just need to invest a little effort (and perhaps money) to run the neural network locally on your own computer – even a laptop.

Cloud threats

The most popular AI assistants run on the cloud infrastructure of large companies. It’s efficient and fast, but your data processed by the model may be accessible to both the AI service provider and completely unrelated parties, as happened last year with ChatGPT.

Such incidents present varying levels of threat depending on what these AI assistants are used for. If you’re generating cute illustrations for some fairy tales you’ve written, or asking ChatGPT to create an itinerary for your upcoming weekend city break, it’s unlikely that a leak will lead to serious damage. However, if your conversation with a chatbot contains confidential info — personal data, passwords, or bank card numbers — a possible leak to the cloud is no longer acceptable. Thankfully, it’s relatively easy to prevent by pre-filtering the data — we’ve written a separate post about that.

However, in cases where either all the correspondence is confidential (for example, medical or financial information), or the reliability of pre-filtering is questionable (you need to process large volumes of data that no one will preview and filter), there’s only one solution: move the processing from the cloud to a local computer. Of course, running your own version of ChatGPT or Midjourney offline is unlikely to be successful, but other neural networks working locally provide comparable quality with less computational load.

What hardware do you need to run a neural network?

You’ve probably heard that working with neural networks requires super-powerful graphics cards, but in practice this isn’t always the case. Different AI models, depending on their specifics, may be demanding on such computer components as RAM, video memory, drive, and CPU (here, not only the processing speed is important, but also the processor’s support for certain vector instructions). The ability to load the model depends on the amount of RAM, and the size of the “context window” — that is, the memory of the previous conversation — depends on the amount of video memory. Typically, with a weak graphics card and CPU, generation occurs at a snail’s pace (one to two words per second for text models), so a computer with such a minimal setup is only appropriate for getting acquainted with a particular model and evaluating its basic suitability. For full-fledged everyday use, you’ll need to increase the RAM, upgrade the graphics card, or choose a faster AI model.

As a starting point, you can try working with computers that were considered relatively powerful back in 2017: processors no lower than Core i7 with support for AVX2 instructions, 16GB of RAM, and graphics cards with at least 4GB of memory. For Mac enthusiasts, models running on the Apple M1 chip and above will do, while the memory requirements are the same.

When choosing an AI model, you should first familiarize yourself with its system requirements. A search query like “model_name requirements” will help you assess whether it’s worth downloading this model given your available hardware. There are detailed studies available on the impact of memory size, CPU, and GPU on the performance of different models; for example, this one.

Good news for those who don’t have access to powerful hardware — there are simplified AI models that can perform practical tasks even on old hardware. Even if your graphics card is very basic and weak, it’s possible to run models and launch environments using only the CPU. Depending on your tasks, these can even work acceptably well.

GPU throughput tests

Examples of how various computer builds work with popular language models

Choosing an AI model and the magic of quantization

A wide range of language models are available today, but many of them have limited practical applications. Nevertheless, there are easy-to-use and publicly available AI tools that are well-suited for specific tasks, be they generating text (for example, Mistral 7B), or creating code snippets (for example, Code Llama 13B). Therefore, when selecting a model, narrow down the choice to a few suitable candidates, and then make sure that your computer has the necessary resources to run them.

In any neural network, most of the memory strain is courtesy of weights — numerical coefficients describing the operation of each neuron in the network. Initially, when training the model, the weights are computed and stored as high-precision fractional numbers. However, it turns out that rounding the weights in the trained model allows the AI tool to be run on regular computers while only slightly decreasing the performance. This rounding process is called quantization, and with its help the model’s size can be reduced considerably — instead of 16 bits, each weight might use eight, four, or even two bits.

According to current research, a larger model with more parameters and quantization can sometimes give better results than a model with precise weight storage but fewer parameters.

Armed with this knowledge, you’re now ready to explore the treasure trove of open-source language models, namely the top Open LLM leaderboard. In this list, AI tools are sorted by several generation quality metrics, and filters make it easy to exclude models that are too large, too small, or too accurate.

List of language models sorted by filter set

List of language models sorted by filter set

After reading the model description and making sure it’s potentially a fit for your needs, test its performance in the cloud using Hugging Face or Google Colab services. This way, you can avoid downloading models which produce unsatisfactory results, saving you time. Once you’re satisfied with the initial test of the model, it’s time to see how it works locally!

Required software

Most of the open-source models are published on Hugging Face, but simply downloading them to your computer isn’t enough. To run them, you have to install specialized software, such as LLaMA.cpp, or — even easier — its “wrapper”, LM Studio. The latter allows you to select your desired model directly from the application, download it, and run it in a dialog box.

Another “out-of-the-box” way to use a chatbot locally is GPT4All. Here, the choice is limited to about a dozen language models, but most of them will run even on a computer with just 8GB of memory and a basic graphics card.

If generation is too slow, then you may need a model with coarser quantization (two bits instead of four). If generation is interrupted or execution errors occur, the problem is often insufficient memory — it’s worth looking for a model with fewer parameters or, again, with coarser quantization.

Many models on Hugging Face have already been quantized to varying degrees of precision, but if no one has quantized the model you want with the desired precision, you can do it yourself using GPTQ.

This week, another promising tool was released to public beta: Chat With RTX from NVIDIA. The manufacturer of the most sought-after AI chips has released a local chatbot capable of summarizing the content of YouTube videos, processing sets of documents, and much more — provided the user has a Windows PC with 16GB of memory and an NVIDIA RTX 30th or 40th series graphics card with 8GB or more of video memory. “Under the hood” are the same varieties of Mistral and Llama 2 from Hugging Face. Of course, powerful graphics cards can improve generation performance, but according to the feedback from the first testers, the existing beta is quite cumbersome (about 40GB) and difficult to install. However, NVIDIA’s Chat With RTX could become a very useful local AI assistant in the future.

The code for the game "Snake", written by the quantized language model TheBloke/CodeLlama-7B-Instruct-GGUF

The code for the game “Snake”, written by the quantized language model TheBloke/CodeLlama-7B-Instruct-GGUF

The applications listed above perform all computations locally, don’t send data to servers, and can run offline so you can safely share confidential information with them. However, to fully protect yourself against leaks, you need to ensure not only the security of the language model but also that of your computer – and that’s where our comprehensive security solution comes in. As confirmed in independent tests, Kaspersky Premium has practically no impact on your computer’s performance — an important advantage when working with local AI models.

]]>
full large medium thumbnail
Secure AI usage both at home and at work | Kaspersky official blog https://www.kaspersky.com/blog/how-to-use-chatgpt-ai-assistants-securely-2024/50562/ Wed, 14 Feb 2024 11:44:17 +0000 https://www.kaspersky.com/blog/?p=50562 Last year’s explosive growth in AI applications, services, and plug-ins looks set to only accelerate. From office applications and image editors to integrated development environments (IDEs) such as Visual Studio — AI is being added to familiar and long-used tools. Plenty of developers are creating thousands of new apps that tap the largest AI models. However, no one in this race has yet been able to solve the inherent security issues, first and foremost the minimizing of confidential data leaks, and also the level of account/device hacking through various AI tools — let alone create proper safeguards against a futuristic “evil AI”. Until someone comes up with an off-the-shelf solution for protecting the users of AI assistants, you’ll have to pick up a few skills and help yourself.

So, how do you use AI without regretting it later?

Filter important data

The privacy policy of OpenAI, the developer of ChatGPT, unequivocally states that any dialogs with the chatbot are saved and can be used for a number of purposes. First, these are solving technical issues and preventing terms-of-service violations: in case someone gets an idea to generate inappropriate content. Who would have thought it, right? In that case, chats may even be reviewed by a human. Second, the data may be used for training new GPT versions and making other product “improvements”.

Most other popular language models — be it Google’s Gemini, Anthropic’s Claude, or Microsoft’s Bing and Copilot — have similar policies: they can all save dialogs in their entirety.

That said, inadvertent chat leaks have already occurred due to software bugs, with users seeing other people’s conversations instead of their own. The use of this data for training could also lead to a data leak from a pre-trained model: the AI assistant might give your information to someone if it believes it to be relevant for the response. Information security experts have even designed multiple attacks (one, two, three) aimed at stealing dialogs, and they’re unlikely to stop there.

So, remember: anything you write to a chatbot can be used against you. We recommend taking precautions when talking to AI.

Don’t send any personal data to a chatbot. No passwords, passport or bank card numbers, addresses, telephone numbers, names, or other personal data that belongs to you, your company, or your customers must end up in chats with an AI. You can replace these with asterisks or “REDACTED” in your request.

Don’t upload any documents. Numerous plug-ins and add-ons let you use chatbots for document processing. There might be a strong temptation to upload a work document to, say, get an executive summary. However, by carelessly uploading of a multi-page document, you risk leaking confidential data, intellectual property, or a commercial secret such as the release date of a new product or the entire team’s payroll. Or, worse than that, when processing documents received from external sources, you might be targeted with an attack that counts on the document being scanned by a language model.

Use privacy settings. Carefully review your large-language-model (LLM) vendor’s privacy policy and available settings: these can normally be leveraged to minimize tracking. For example, OpenAI products let you disable saving of chat history. In that case, data will be removed after 30 days and never used for training. Those who use API, third-party apps, or services to access OpenAI solutions have that setting enabled by default.

Sending code? Clean up any confidential data. This tip goes out to those software engineers who use AI assistants for reviewing and improving their code: remove any API keys, server addresses, or any other information that could give away the structure of the application or the server configuration.

Limit the use of third-party applications and plug-ins

Follow the above tips every time — no matter what popular AI assistant you’re using. However, even this may not be sufficient to ensure privacy. The use of ChatGPT plug-ins, Gemini extensions, or separate add-on applications gives rise to new types of threats.

First, your chat history may now be stored not only on Google or OpenAI servers but also on servers belonging to the third party that supports the plug-in or add-on, as well as in unlikely corners of your computer or smartphone.

Second, most plug-ins draw information from external sources: web searches, your Gmail inbox, or personal notes from services such as Notion, Jupyter, or Evernote. As a result, any of your data from those services may also end up on the servers where the plug-in or the language model itself is running. An integration like that may carry significant risks: for example, consider this attack that creates new GitHub repositories on behalf of the user.

Third, the publication and verification of plug-ins for AI assistants are currently a much less orderly process than, say, app-screening in the App Store or Google Play. Therefore, your chances of encountering a poorly working, badly written, buggy, or even plain malicious plug-in are fairly high — all the more so because it seems no one really checks the creators or their contacts.

How do you mitigate these risks? Our key tip here is to give it some time. The plug-in ecosystem is too young, the publication and support processes aren’t smooth enough, and the creators themselves don’t always take care to design plug-ins properly or comply with information security requirements. This whole ecosystem needs more time to mature and become securer and more reliable.

Besides, the value that many plug-ins and add-ons add to the stock ChatGPT version is minimal: minor UI tweaks and “system prompt” templates that customize the assistant for a specific task (“Act as a high-school physics teacher…”). These wrappers certainly aren’t worth trusting with your data, as you can accomplish the task just fine without them.

If you do need certain plug-in features right here and now, try to take maximum precautions available before using them.

  • Choose extensions and add-ons that have been around for at least several months and are being updated regularly.
  • Consider only plug-ins that have lots of downloads, and carefully read the reviews for any issues.
  • If the plug-in comes with a privacy policy, read it carefully before you start using the extension.
  • Opt for open-source tools.
  • If you possess even rudimentary coding skills — or coder friends — skim the code to make sure that it only sends data to declared servers and, ideally, AI model servers only.

Execution plug-ins call for special monitoring

So far, we’ve been discussing risks relating to data leaks; but this isn’t the only potential issue when using AI. Many plug-ins are capable of performing specific actions at the user’s command — such as ordering airline tickets. These tools provide malicious actors with a new attack vector: the victim is presented with a document, web page, video, or even an image that contains concealed instructions for the language model in addition to the main content. If the victim feeds the document or link to a chatbot, the latter will execute the malicious instructions — for example, by buying tickets with the victim’s money. This type of attack is referred to as prompt injection, and although the developers of various LLMs are trying to develop a safeguard against this threat, no one has managed it — and perhaps never will.

Luckily, most significant actions — especially those involving payment transactions such as purchasing tickets — require a double confirmation. However, interactions between language models and plug-ins create an attack surface so large that it’s difficult to guarantee consistent results from these measures.

Therefore, you need to be really thorough when selecting AI tools, and also make sure that they only receive trusted data for processing.

]]>
full large medium thumbnail
Cyberthreats to marketing | Kaspersky official blog https://www.kaspersky.com/blog/cyberattacks-on-your-marketing/50571/ Tue, 13 Feb 2024 19:12:22 +0000 https://www.kaspersky.com/blog/?p=50571 When it comes to attacks on businesses, the focus is usually on four aspects: finance, intellectual property, personal data, and IT infrastructure. However, we mustn’t forget that cybercriminals can also target company assets managed by PR and marketing — including e-mailouts, advertising platforms, social media channels, and promotional sites. At first glance, these may seem unattractive to the bad guys (“where’s the revenue?”), but in practice each can serve cybercriminals in their own “marketing activities”.

Malvertising

To the great surprise of many (even InfoSec experts), cybercriminals have been making active use of legitimate paid advertising for a number of years now. In one way or another they pay for banner ads and search placements, and employ corporate promotion tools. There are many examples of this phenomenon, which goes by the name of malvertising (malicious advertising). Usually, cybercriminals advertise fake pages of popular apps, fake promo campaigns of famous brands, and other fraudulent schemes aimed at a wide audience. Sometimes threat actors create an advertising account of their own and pay for advertising, but this method leaves too much of a trail (such as payment details). So a different method is more attractive to them: stealing login credentials and hacking the advertising account of a straight-arrow company, then promoting their sites through it. This has a double payoff for the cybercriminals: they get to spend others’ money without leaving excess traces. But the victim company, besides a gutted advertising account, gets one problem after another — including potentially being blocked by the advertising platform for distributing malicious content.

Downvoted and unfollowed

A variation of the above scheme is a takeover of social networks’ paid advertising accounts. The specifics of social media platforms create additional troubles for the target company.

First, access to corporate social media accounts is usually tied to employees’ personal accounts. It’s often enough for attackers to compromise an advertiser’s personal computer or steal their social network password to gain access not only to likes and cat pics but to the scope of action granted by the company they work for. That includes posting on the company’s social network page, sending emails to customers through the built-in communication mechanism, and placing paid advertising. Revoking these functions from a compromised employee is easy as long as they aren’t the main administrator of the corporate page — in which case, restoring access will be labor-intensive in the extreme.

Second, most advertising on social networks takes the form of “promoted posts” created on behalf of a particular company. If an attacker posts and promotes a fraudulent offer, the audience immediately sees who published it and can voice their complaints directly under the post. In this case, the company will suffer not just financial but visible reputational damage.

Third, on social networks many companies save “custom audiences” — ready-made collections of customers interested in various products and services or who have previously visited the company’s website. Although these usually can’t be pulled (that is, stolen) from a social network, unfortunately it’s possible to create malvertising on their basis that’s adapted to a specific audience and is thus more effective.

Unscheduled circular

Another effective way for cybercriminals to get free advertising is to hijack an account on an email service provider. If the attacked company is large enough, it may have millions of subscribers in its mailing list.

This access can be exploited in a number of ways: by mailing an irresistible fake offer to email addresses in the subscriber database; by covertly substituting links in planned advertising emails; or by simply downloading the subscriber database in order to send them phishing emails in other ways later on.

Again, the damage suffered is financial, reputational, and technical. By “technical” we mean the blocking of future incoming messages by mail servers. In other words, after the malicious mailouts, the victim company will have to resolve matters not only with the mailing platform but also potentially with specific email providers that have blocked you as a source of fraudulent correspondents.

A very nasty side effect of such an attack is the leakage of customers’ personal data. This is an incident in its own right — capable of inflicting not only reputational damage but also landing you with a fine from data protection regulators.

Fifty shades of website

A website hack can go unnoticed for a long time — especially for a small company that does business primarily through social networks or offline. From the cybercriminals’ point of view, the goals of a website hack vary depending on the type of site and the nature of the company’s business. Leaving aside cases when website compromise is part of a more sophisticated cyberattack, we can generally delineate the following varieties.

First, threat actors can install a web skimmer on an e-commerce site. This is a small, well-disguised piece of JavaScript embedded directly in the website code that steals card details when customers pay for a purchase. The customer doesn’t need to download or run anything — they simply pay for goods or services on the site, and the attackers skim off the money.

Second, attackers can create hidden subsections on the site and fill them with malicious content of their choosing. Such pages can be used for a wide variety of criminal activity, be it fake giveaways, fake sales, or distributing Trojanized software. Using a legitimate website for these purposes is ideal, just as long as the owners don’t notice that they have “guests”. There is, in fact, a whole industry centered around this practice. Especially popular are unattended sites created for some marketing campaign or one-time event and then forgotten about.

The damage to a company from a website hack is broad-ranging, and includes: increased site-related costs due to malicious traffic; a decrease in the number of real visitors due to a drop in the site’s SEO ranking; potential wrangles with customers or law enforcement over unexpected charges to customers’ cards.

Hotwired web forms

Even without hacking a company’s website, threat actors can use it for their own purposes. All they need is a website function that generates a confirmation email: a feedback form, an appointment form, and so on. Cybercriminals use automated systems to exploit such forms for spamming or phishing.

The mechanics are straightforward: the target’s address is entered into the form as a contact email, while the text of the fraudulent email itself goes in the Name or Subject field, for example, “Your money transfer is ready for issue (link)”. As a result, the victim receives a malicious email that reads something like: “Dear XXX, your money transfer is ready for issue (link). Thank you for contacting us. We’ll be in touch shortly”. Naturally, the anti-spam platforms eventually stop letting such emails through, and the victim company’s form loses some of its functionality. In addition, all recipients of such mail think less of the company, equating it with a spammer.

How to protect PR and marketing assets from cyberattacks

Since the described attacks are quite diverse, in-depth protection is called for. Here are the steps to take:

  • Conduct cybersecurity awareness training across the entire marketing department. Repeat it regularly;
  • Make sure that all employees adhere to password best practices: long, unique passwords for each platform and mandatory use of two-factor authentication — especially for social networks, mailing tools, and ad management platforms;
  • Eliminate the practice of using one password for all employees who need access to a corporate social network or other online tool;
  • Instruct employees to access mailing/advertising tools and the website admin panel only from work devices equipped with full protection in line with company standards (EDR or internet security, EMM/UEM, VPN);
  • Urge employees to install comprehensive protection on their personal computers and smartphones;
  • Introduce the practice of mandatory logout from mailing/advertising platforms and other similar accounts when not in use;
  • Remember to revoke access to social networks, mailing/advertising platforms, and website admin immediately after an employee departs the company;
  • Regularly review email lists sent out and ads currently running, together with detailed website traffic analytics so as to spot anomalies in good time;
  • Make sure that all software used on your websites (content management system, its extensions) and on work computers (such as OS, browser, and Office), is regularly and systematically updated to the very latest versions;
  • Work with your website support contractor to implement form validation and sanitization; in particular, to ensure that links can’t be inserted into fields that aren’t intended for such a purpose. Also set a “rate limit” to prevent the same actor from making hundreds of requests a day, plus a smart captcha to guard against bots.

 

]]>
full large medium thumbnail
One-time passwords and 2FA codes — what to do if you receive one without requesting it | Kaspersky official blog https://www.kaspersky.com/blog/unexpected-login-codes-otp-2fa/50526/ Thu, 08 Feb 2024 12:42:25 +0000 https://www.kaspersky.com/blog/?p=50526 Over the past few years, we’ve become accustomed to logging into important websites and apps, such as online banking ones, using both a password and one other verification method. This could be a one-time password (OTP) sent via a text message, email or push notification; a code from an authenticator app; or even a special USB device (“token”). This method of logging in is called two-factor authentication (2FA), and it makes hacking much more difficult: stealing or guessing a password alone is no longer sufficient to hijack an account. But what should you do if you haven’t tried to log in anywhere yet suddenly receive a one-time code or a request to enter it?

There are three reasons why this situation might occur:

  1. A hacking attempt. Hackers have somehow learned, guessed, or stolen your password and are now trying to use it to access your account. You’ve received a legitimate message from the service they are trying to access.
  2. Preparation for a hack. Hackers have either learned your password or are trying to trick you into revealing it, in which case the OTP message is a form of phishing. The message is fake, although it may look very similar to a genuine one.
  3. Just a mistake. Sometimes online services are set up to first request a confirmation code from a text message, and then a password, or authenticate with just one code. In this case, another user could have made a typo and entered your phone/email instead of theirs — and you receive the code.

As you can see, there may be a malicious intent behind this message. But the good news is that at this stage, there has been no irreparable damage, and by taking the right action you can avoid any trouble.

What to do when you receive a code request

Most importantly, don’t click the confirmation button if the message is in the “Yes/No” form, don’t log in anywhere, and don’t share any received codes with anyone.

If the code request message contains links, don’t follow them.

These are the most essential rules to follow. As long as you don’t confirm your login, your account is safe. However, it’s highly likely that your account’s password is known to attackers. Therefore, the next thing to do is change the password for this account. Go to the relevant service by entering its web address manually — not by following a link. Enter your password, get a new (this is important!) confirmation code, and enter it. Then find the password settings and set a new, strong password. If you use the same password for other accounts, you’d need to change the password for them, too — but make sure to create a unique password for each account. We understand that it’s difficult to remember so many passwords, so we highly recommend storing them in a dedicated password manager.

This stage — changing your passwords — is not so urgent. There’s no need to do it in a rush, but also don’t postpone it. For valuable accounts (like banking), attackers may try to intercept the OTP if it’s sent via text. This is done through SIM swapping (registering a new SIM card to your number) or launching an attack via the operator’s service network utilizing a flaw in the SS7 communications protocol. Therefore, it’s important to change the password before the bad guys attempt such an attack. In general, one-time codes sent by text are less reliable than authenticator apps and USB tokens. We recommend always using the most secure 2FA method available; a review of different two-factor authentication methods can be found here.

What to do if you’re receiving a lot of OTP requests

In an attempt to make you confirm a login, hackers may bombard you with codes. They try to log in to the account again and again, hoping that you’ll either make a mistake and click “Confirm”, or go to the service and disable 2FA out of annoyance. It’s important to keep cool and do neither. The best thing to do is go to the service’s site as described above (open the site manually, not through a link) and quickly change the password; but for this, you’ll need to receive and enter your own, legitimate OTP. Some authentication requests (for example, warnings about logging into Google services) have a separate “No, it’s not me” button — usually, this button causes automated systems on the service side to automatically block the attacker and any new 2FA requests. Another option, albeit not the most convenient one, would be to switch the phone to silent or even airplane mode for half-an-hour or so until the wave of codes subsides.

What to do if you accidentally confirm a stranger’s login

This is the worst-case scenario, as you’ve likely allowed an attacker into your account. Attackers act quickly in changing settings and passwords, so you’ll have to play catch-up and deal with the consequences of the hack. We’ve provided advice for this scenario here.

How to protect yourself?

The best method of defense in this case is to stay one step ahead of the criminals: si vis pacem, para bellum. This is where our security solution comes in handy. It tracks leaks of your accounts linked to both email addresses and phone numbers, including on the dark web. You can add the phone numbers and email addresses of all your family members, and if any account data becomes public or is discovered in leaked databases, Kaspersky Premium will alert you and give advice on what to do.

Included in the subscription, Kaspersky Password Manager will warn you about compromised passwords and help you change them, generating new uncrackable passwords for you. You can also add two-factor authentication tokens to it or easily transfer them from Google Authenticator in a few clicks. Secure storage for your personal documents will safeguard your most important documents and files, such as passport scans or personal photos, in encrypted form so that only you can access them.

Moreover, your logins, passwords, authentication codes and saved documents will be available from any of your devices — computer, smartphone or tablet — so even if you somehow lose your phone, you’ll lose neither your data nor access, and you’ll be able to easily restore them on a new device. And to access all your data, you only need to remember one password — the main one — which isn’t stored anywhere except in your head and is used for banking-standard AES data encryption.

With the “zero disclosure principle”, no one can access your passwords or data — not even Kaspersky employees. The reliability and effectiveness of our security solutions have been confirmed by numerous independent tests, with one recent example being our home protection solutions having received the highest award — Product of the Year 2023 — in tests run by the independent European laboratory AV-Comparatives.

]]>
full large medium thumbnail
Can TVs, smartphones, and smart assistants eavesdrop on your conversations? | Kaspersky official blog https://www.kaspersky.com/blog/smart-speaker-tv-smartphone-eavesdropping/50236/ Tue, 16 Jan 2024 08:57:01 +0000 https://www.kaspersky.com/blog/?p=50236 Rumors of eavesdropping smart devices have been circulating for many years. Doubtless, you’ve heard a tale or two about how someone was discussing, say, the new coffee machine at work, and then got bombarded with online ads for, yes, coffee machines. We’ve already tested this hypothesis, and concluded that advertisers aren’t eavesdropping — they have many other less dramatic but far more effective ways of targeting ads. But perhaps the times are changing? News broke recently (here and here) about two marketing firms allegedly bragging about offering targeted ads based on just such eavesdropping. Granted, both companies later retracted their words and removed the relevant statements from their websites. Nevertheless, we decided to take a fresh look at the situation.

What the firms claimed

In calls with clients, podcasts, and blogs, CMG and Mindshift told much the same story — albeit devoid of any technical detail: smartphones and smart TVs allegedly help them recognize predetermined keywords in people’s conversations, which are then used to create custom audiences. These audiences, in the form of lists of phone numbers, email addresses, and anonymous advertising IDs, can be uploaded to various platforms (from YouTube and Facebook to Google AdWords and Microsoft Advertising) and leveraged to target ads at users.

If the second part about uploading custom audiences sounds quite plausible, the first is more than hazy. It’s not clear at all from the companies’ statements which apps and which technologies they use to collect information. But in the long (now deleted) blog post, the following non-technical passage stood out most of all: “We know what you’re thinking. Is this even legal? It is legal for phones and devices to listen to you. When a new app download or update prompts consumers with a multi-page term of use agreement somewhere in the fine print, Active Listening is often included.”

After being pestered by journalists, CMG removed the post from its blog and issued an apology/clarification, adding that there’s no eavesdropping involved, and the targeting data is “sourced by social media and other applications”.

The second company, Mindshift, just quietly erased all marketing messages about this form of advertising from its website.

When did they lie?

Clearly, the marketers “misspoke” either to their clients in promising voice-activated ads, or to the media Most likely it was the former; here’s why:

  • Modern operating systems indicate clearly when the microphone is in use by a legitimate app. And if, say, some weather app is constantly listening to the microphone, waiting for, say, the words “coffee machine” to come from your lips, the microphone icon will light up in the notification panel of all the most popular operating systems.
  • On smartphones and other mobile devices, continuous eavesdropping will drain the battery and eat up data. This will get noticed and cause a wave of hate.
  • Constantly analyzing audio streams from millions of users would require massive computing power and be financial folly — since advertising profits could never cover the costs of such a targeting operation.

Contrary to popular belief, the annual revenue of advertising platforms per user is quite small: less than $4 in Africa, around $10 on average worldwide, and up to $60 in the U.S. Given that these figures refer to income, not profit, there’s simply no money left for eavesdropping. Doubters are invited to study, for example, Google Cloud’s speech recognition pricing: even at the most discounted wholesale rate (two million+ minutes of audio recordings per month), converting speech to text costs 0.3 cents per minute. Assuming a minimum of three hours of speech recognition per day, the client would have to spend around $200 per year on each individual user — too much even for U.S. advertising firms.

What about voice assistants?

That said, the above reasoning may not hold true for devices that already listen to voice commands by nature of their primary purpose. First and foremost are smart speakers, as well as smartphones with voice assistants permanently on. Less obvious devices include smart TVs that also respond to voice commands.

According to Amazon, Alexa is always listening out for the wake word, but only records and sends voice data to the cloud upon hearing it, and stops as soon as interaction with the user is over. The company doesn’t deny that Alexa data is used for ad targeting, and independent studies confirm it. Some users consider such a practice to be illegal, but the lawsuit they filed against Amazon is still ongoing. Meanwhile, another action brought against Amazon by the U.S. Federal Communications Commission resulted in a modest $30 million settlement. The e-commerce giant was ordered to pay out for failing to delete children’s data collected by Alexa, in direct violation the U.S. Children’s Online Privacy Protection Act (COPPA). The company is also barred from using this illegally harvested data for business needs — in particular training algorithms.

And it’s long been an open secret that other voice assistant vendors also collect user interaction data: here’s the lowdown on Apple and Google. Now and then, these recordings are listened to by living people — to solve technical issues, train new algorithms, and so on. But are they used to target ads? Some studies confirm such practices on the part of Google and Amazon, although it’s more a case of using voice search or purchase history rather than constant eavesdropping. As for Apple, there was no link between ads and Siri in any study.

We did not find a study devoted to smart TV voice commands, but it has long been known that smart TVs collect detailed information about what users watch — including video data from external sources (Blue-ray Disc player, computer, and so on). It can’t be ruled out that voice interactions with the built-in assistant are also used more extensively than one might like.

Special case: spyware

True smartphone eavesdropping also occurs, of course, but here it’s not about mass surveillance for advertising purposes but targeted spying on a specific victim. There are many documented cases of such surveillance — the perpetrators of which can be jealous spouses, business competitors, and even bona fide intelligence agencies. But such eavesdropping requires malware to be installed on the victim’s smartphone — and often, “thanks” to vulnerabilities, this can happen without any action whatsoever on the part of the target. Once a smartphone is infected, the attacker’s options are virtually limitless. We have a string of posts dedicated to such cases: read about stalkerware, infected messenger mods, and, of course, the epic saga of our discovery of Triangulation, perhaps the most sophisticated Trojan for Apple devices there has ever been. In the face of such threats, caution alone won’t suffice — targeted measures are needed to keep your smartphone safe, which include installing a reliable protection solution.

How to guard against eavesdropping

  • Disable microphone permission on smartphones and tablets for all apps that don’t need it. In modern versions of mobile operating systems, in the same place under permissions and privacy management, you can see which apps used your phone’s microphone (and other sensors) and when. Make sure there’s nothing suspicious or unexpected in this list.
  • Control which apps have access to the microphone on your computer — the permission settings in the latest versions of Windows and macOS are much the same as on smartphones. And install reliable protection on your computer to prevent snooping through malware.
  • Consider turning off the voice assistant. Although it doesn’t listen in continuously, some unwanted snippets may end up in the recordings of your conversations with it. If you’re worried that the voices of your friends, family, or coworkers might get onto the servers of global corporations, use keyboards, mice, and touchscreens instead.
  • Turn off voice control on your TV. To make it easier to input names, connect a compact wireless keyboard to your smart TV.
  • Kiss smart speakers goodbye. For those who like to play music through speakers while checking recipes and chopping vegetables, this is the hardest tip to follow. But a smart speaker is pretty much the only gadget capable of eavesdropping on you that really does it all the time. So, you either have to live with that fact — or power them up only when you’re chopping vegetables.
]]>
full large medium thumbnail
Cloud SSO implementations, and how to reduce attack risks https://www.kaspersky.com/blog/key-issues-in-sso-implementation/50243/ Mon, 15 Jan 2024 19:46:34 +0000 https://www.kaspersky.com/blog/?p=50243 Credentials leaks are still among attackers’ most-used penetration techniques. In 2023 Kaspersky Digital Footprint Intelligence experts found on the darknet more than 3100 ads offering access to corporate resources – some of them owned by Fortune 500 companies. To more effectively manage associated risks, minimize the number of vulnerable accounts, and detect and block unauthorized access attempts quicker, companies are adopting identity management systems, which we covered in detail previously. However, an effective identity management process isn’t feasible until most corporate systems support unified authentication. Internal systems usually depend on a centralized catalog – such as Active Directory – for unified authentication, whereas external SaaS systems talk to the corporate identity catalog via a single sign-on (SSO) platform, which can be located externally or hosted in the company’s infrastructure (such as ADFS).

For employees, it makes the log-in process as user-friendly as it gets. To sign in to an external system – such as Salesforce or Concur – the employee completes the standard authentication procedure, which includes entering a password and submitting a second authentication factor: a one-time password, USB token, or something else – depending on the company’s policy. No other logins or passwords are needed. Moreover, after you sign in to one of the systems in the morning, you’ll be authenticated in the others by default. In theory the process is secure, as the IT and infosec teams have full centralized control over accounts, password policies, MFA methods, and logs. In real life however, the standard of security implemented by external systems that support SSO may prove not so high.

SSO pitfalls

When the user signs in to a software-as-a-service (SaaS) system, the system server, the user’s client device, and the SSO platform go through a series of handshakes as the platform validates the user and issues the SaaS and the device with authentication tokens that confirm the user’s permissions. The token can get a range of attributes from the platform that have a bearing on security. These may include the following:

  • Token (and session) expiration, which requires the user to get authenticated again
  • Reference to a specific browser or mobile device
  • Specific IP addresses or IP range limits, which enable things like geographic restrictions
  • Extra conditions for session expiration, such as closing the browser or signing out of the SSO platform

The main challenge is that some cloud providers misinterpret or even ignore these restrictions, thus undermining the security model built by the infosec team. On top of that, some SaaS platforms have inadequate token validity controls, which leaves room for forgery.

How SSO implementation flaws are exploited by malicious actors

The most common scenario is some form of a token theft. This can be stealing cookies from the user’s computer, intercepting traffic, or capturing HAR files (traffic archives). The same token being used on a different device and from a different IP address is generally an urgent-enough signal for the SaaS platform that calls for revalidation and possibly, reauthentication. In the real world though, malicious actors often successfully use stolen tokens to sign in to the system on behalf of the legitimate user, while circumventing passwords, one-time codes, and other infosec protections.

Another frequent scenario is targeted phishing that relies on fake corporate websites and, if required, a reverse proxy like evilginx2, which steals passwords, MFA codes, and tokens too.

Improving SSO security

Examine your SaaS vendors. The infosec team can add SSO implementation of the SaaS provider to the list of questions that vendors are required to respond to when submitting their proposals. In particular, these are questions about observing various token restrictions, validation, expiration, and revocation. Further examination steps can include application code audits, integration testing, vulnerability analysis, and pentesting.

Plan compensatory measures. There’s a variety of methods to prevent token manipulation and theft. For example, the use of EDR on all computers significantly lowers the risk of being infected with malware, or redirected to a phishing site. Management of mobile devices (EMM/UEM) can sort out mobile access to corporate resources. In certain cases, we recommend barring unmanaged devices from corporate services.

Configure your traffic analysis and identity management systems to look at SSO requests and responses, so that they can identify suspicious requests that originate from unusual client applications or non-typical users, in unexpected IP address zones, and so on. Tokens that have excessively long lifetimes can be addressed with traffic control as well.

Insist on better SSO implementation. Many SaaS providers view SSO as a customer amenity, and a reason for offering a more expensive “enterprise” plan, whereas information security takes a back seat. You can partner with your procurement team to get some leverage over this, but things will change rather slowly. While talking to SaaS providers, it’s never a bad idea to ask about their plans for upgrading the SSO feature – such as support for the token restrictions mentioned above (geoblocking, expiration, and so on), or any plans to transition to using newer, better-standardized token exchange protocols – such as JWT or CAEP.

]]>
full large medium thumbnail
Resolutions for a cybersecure 2024 | Kaspersky official blog https://www.kaspersky.com/blog/cybersecurity-resolutions-2024/50177/ Fri, 05 Jan 2024 14:55:48 +0000 https://www.kaspersky.com/blog/?p=50177 The rapid development of AI, international tensions, and the proliferation of “smart” technologies like the internet of things (IoT) make the upcoming year particularly challenging in terms of cybersecurity. Each of us will face these challenges in one way or another, so, as per tradition, we’re here to help all our readers make a few New Year’s resolutions for a more secure 2024.

Protect your finances

E-commerce and financial technologies continue to expand globally, and successful technologies are being adopted in new regions. Instant electronic payments between individuals have become much more widespread. And, of course, criminals are devising new ways to swindle you out of your money. This involves not only fraud using instant money-transfer systems, but also advanced techniques for stealing payment data on e-commerce sites and online stores. The latest generations of web skimmers installed by hackers on legitimate online shopping sites are almost impossible to perceive, and victims only learn that their data has been stolen when an unauthorized charge appears on their card.

What to do?

  • Link your bank cards to Apple Pay, Google Pay, or other similar payment systems available in your country. This is not only convenient, but also reduces the likelihood of data theft when making purchases in stores.
  • Use such systems to make payments on websites whenever possible. There’s no need to enter your bank card details afresh on every new website.
  • Protect your smartphones and computers with a comprehensive security system like Kaspersky Premium. This will help protect your money, for example, from a nasty new attack in which the recipient’s details are replaced at the moment of making an instant money transfer in a banking app.
  • Use virtual or one-time cards for online payments if your bank supports this option. If a virtual card can be quickly reissued in the app, change it regularly — for example, once a month. Or use special services to ‘mask’ cards, generating one-time payment details for each payment session. There are many of these for different countries and payment systems.

Don’t believe everything you see

Generative artificial intelligence has dominated the news throughout 2023 and has already significantly affected the job market. Unfortunately, it’s also been used for malicious purposes. Now, just about anyone can create fake texts, photos, and videos in a matter of minutes — a labor that previously required a lot of time and skill. This has already had a noticeable impact on at least two areas of cybersecurity.

First, the appearance of fake images, audio, and video on news channels and social media. In 2023, generated images were used for propaganda purposes during geopolitical conflicts in post-Soviet countries and the Middle East. They were also used successfully by fraudsters for various instances of fake fundraising. Moreover, towards the end of the year, our experts discovered massive “investment” campaigns in which the use of deepfakes reached a whole new level: now we’re seeing news reports and articles on popular channels about famous businessmen and heads of state encouraging users to invest in certain projects — all fake, of course.

Second, AI has made it much easier to generate phishing emails, social media posts, and fraudulent websites. For many years, such scams could be identified by sloppy language and numerous typos, because the scammers didn’t have the time to write and proofread them properly. But now, with WormGPT and other language models optimized for hackers, attackers can create far more convincing and varied bait on an industrial scale. What’s more, experts fear that scammers will start using these same multilingual AI models to create convincing phishing material in languages and regions that have rarely been targeted for such purposes before.

What to do?

  • Be highly critical of any emotionally provocative content you encounter on social media — especially from people you don’t know personally. Make it a habit to always verify the facts on reputable news channels and expert websites.
  • Don’t transfer money to any kind of charity fundraiser or campaign without conducting a thorough background check of the recipient first. Remember, generating heart-breaking stories and images is literally as easy as pushing a button these days.
  • Install phishing and scam protection on all your devices, and enable all options that check links, websites, emails, and attachments. This will reduce the risk of clicking on phishing links or visiting fraudulent websites.
  • Activate banner ad protection — both Kaspersky Plus and Kaspersky Premium have this feature, as do a number of browsers. Malicious advertising is another trend for 2023-2024.

Some experts anticipate the emergence of AI-generated content analysis and labeling systems in 2024. However, don’t expect them to be implemented quickly or universally, or be completely reliable. Even if such solutions do emerge, always double-check any information with trusted sources.

Don’t believe everything you hear

High-quality AI-based voice deepfakes are already being actively used in fraudulent schemes. Someone claiming to be your “boss”, “family member”, “colleague”, or some other person with a familiar voice might call asking for urgent help — or to help someone else who’ll soon reach out to you. Such schemes mainly aim to trick victims into voluntarily sending money to criminals. More complex scenarios are also possible — for example, targeting company employees to obtain passwords for accessing the corporate network.

What to do?

  • Verify any unexpected or alarming calls without panic. If someone you supposedly know well calls, ask a question only that person can answer. If a colleague calls but their request seems odd — for example, asking you to send or spell a password, send a payment, or do something else unusual — reach out to other colleagues or superiors to double-check things.
  • Use caller identifier apps to block spam and scam calls. Some of these apps work not only with regular phone calls but also with calls through messengers like WhatsApp.

Buy only safe internet-of-things (IoT) smart devices

Poorly protected IoT devices create a whole range of problems for their owners: robot vacuum cleaners spy on their owners, smart pet feeders can give your pet an unplanned feast or a severe hunger strike, set-top boxes steal accounts and create rogue proxies on your home network, and baby monitors and home security cameras turn your home into a reality TV show without your knowledge.

What could improve in 2024? The emergence of regulatory requirements for IoT device manufacturers. For example, the UK will ban the sale of devices with default logins and passwords like “admin/admin”, and require manufacturers to disclose in advance how long a particular device will receive firmware updates. In the U.S., a security labeling system is being developed that will make it possible to understand what to expect from a “smart” device in terms of security even before purchase.

What to do?

  • Find out if there are similar initiatives in your country and make the most of them by purchasing only secure IoT devices with a long period of declared support. It’s likely that once manufacturers are obliged to ensure the security of smart devices locally, they’ll make corresponding changes to products for the global market. Then you’ll be able to choose a suitable product by checking, for example, the American “security label”, and buy it — even if you’re not in the U.S.
  • Carefully configure all smart devices using our detailed advice on creating a smart home and setting up its security.

Take care of your loved ones

Scams involving fake texts, images, and voices messages can be highly effective when used on elderly people, children, or those less interested in technology. Think about your family, friends, and colleagues — if any of them may end up a victim of any the schemes described above, take the time to tell them about them or provide a link to our blog.

What to do?

Before we say goodbye and wish you a happy and peaceful 2024, one final little whisper — last year’s New Year’s resolutions are still very relevant: the transition to password-less systems is progressing at a swift pace, so going password-free in the New Year might be a good idea, while basic cyber hygiene has become all the more crucial. Oops; nearly forgot: wishing you a happy and peaceful 2024!…

]]>
full large medium thumbnail
Can you trust Windows Hello biometric authentication | Kaspersky official blog https://www.kaspersky.com/blog/securing-biometrics-windows-hello/50094/ Wed, 20 Dec 2023 17:45:27 +0000 https://www.kaspersky.com/blog/?p=50094 Due to mass password leaks, user forgetfulness, and other problematic areas of modern information security, alternative ways of logging in to systems and corporate software are gaining ground. Besides the familiar authenticator apps and various contactless cards and USB tokens, fingerprint-based biometric authentication is a popular choice — especially since laptop keyboards these days often come with built-in scanners.

This method does seem rather reliable at first glance; however, a recent report by Blackwing Intelligence casts doubt upon this assertion. The authors managed to hack the biometric authentication system and log in to Windows using Windows Hello on Dell Inspiron 15 and Lenovo ThinkPad T14 laptops, as well as using the Microsoft Surface Pro Type Cover with Fingerprint ID keyboard for Surface Pro 8 and Surface Pro X tablets. Let’s have a look at their findings to see whether you should update your cyberdefense strategy.

Anatomy of the hack

First of all, we must note that this was a hardware hack. The researchers had to partially disassemble all three devices, disconnect the sensors from the internal USB bus, and connect them to external USB ports through a Raspberry PI 4 device that carried out a man-in-the-middle attack. The attack exploits the fact that all chips certified for Windows Hello must store the fingerprint database independently, in the on-chip memory. No fingerprints are ever transmitted to the computer itself — only cryptographically signed verdicts such as “User X successfully passed verification”. In addition, the protocol and the chips themselves support storing multiple fingerprints for different users.

The researchers were able to perform the spoofing, although attacks varied for different laptop models. They uploaded onto the chip additional fingerprints, supposedly for a new user, but were able to modify the data exchange with the computer so that information about the successful verification of the new user would be associated with the ID of the old one.

The main reason the spoofing worked was that all verified devices deviate to some degree from the Secure Device Connection Protocol (SDCP), which Microsoft developed specifically to head off such attacks. The protocol takes account of many common attack scenarios — from data spoofing to replaying a data exchange between the operating system and the chip when the user is not at the computer. Hacking the implementation of the security system on a Dell (Goodix fingerprint scanner) proved possible due to the fact that the Linux driver doesn’t support SDCP, the chip stores two separate databases for Windows and Linux, and information about the choice of database is transmitted without encryption. Lenovo (Synaptics chip) uses its own encryption instead of SDCP, and the authors managed to figure out the key generation mechanism and decrypt the exchange protocol. Rather jaw-droppingly, the Microsoft keyboard (ELAN chip) doesn’t use SDCP at all, and the standard Microsoft encryption is simply absent.

Main takeaways

Hardware hacks are difficult to prevent, yet equally if not more difficult to carry out. This case isn’t about simply inserting a USB flash drive into a computer for a minute; skill and care are required to assemble and disassemble the target laptop, and throughout the period of unauthorized access the modifications to the computer are obvious. In other words, the attack cannot be carried out unnoticed, and it’s not possible to return the device to the rightful user before the hack is complete and the machine is restored to its original form. As such, primarily at risk are the computers of company employees with high privileges or access to valuable information, and also of those who often work remotely.

To mitigate the risk to these user groups:

  • Don’t make biometrics the only authentication factor. Complement it with a password, authenticator app, or USB token. If necessary, you can combine these authentication factors in different ways. A user-friendly policy might require a password and biometrics at the start of work (after waking up from sleep mode or initial booting), and then only biometrics during the working day;
  • Use external biometric scanners that have undergone an in-depth security audit;
  • Implement physical security measures to prevent laptops from being opened or removed from designated locations;
  • Combine all of the above with full-disk encryption and the latest versions of UEFI with secure boot functions activated.

Lastly, remember that, although biometric scanners aren’t perfect, hacking them is far more difficult than extracting passwords from employees. So even if biometrics aren’t not the optimal solution for your company, there’s no reason to restrict yourself to just passwords.

]]>
full large medium thumbnail
How to stop, disable, and remove any Android apps — even system ones | Kaspersky official blog https://www.kaspersky.com/blog/how-to-disable-and-remove-android-bloatware/49960/ Fri, 01 Dec 2023 14:34:27 +0000 https://www.kaspersky.com/blog/?p=49960 Most smartphones have an average of around 80 installed apps, of which at least 30% are never used since most are forgotten about. But such “ballast” is harmful: there’s less free space on the device; potential bugs and compatibility issues multiply; and even unused apps at times distract you with pointless alerts.

To make things worse, abandoned apps can continue collecting data about the phone and its owner and feed it to advertising firms, or simply gobble up mobile data. Hopefully, we’ve already convinced you to “debloat” your smartphone at least a couple of times a year and uninstall apps you haven’t used for ages — not forgetting to cancel any paid subscriptions to them!

But, unfortunately, some apps are vendor-protected against uninstallation, and so aren’t all that easy to jettison. Thankfully, there are some ways to get round this problem…

Uninstall the app

Sometimes you can’t find an unwanted app under the Manage apps & device tab of the Google Play app. First, try to remove it through the phone settings: look there for the Apps section. This lists all installed programs and has a search feature to save you from having to scroll through them all. Having found the unwanted app and tapping it, you’re taken to the App Info screen. Here you can view the app’s mobile data, battery, and storage consumption, and, most importantly, find and tap the Uninstall button. If the button is there and active, the job’s done.

List of all installed apps and the App Info screen with the Uninstall button

List of all installed apps and the App Info screen with the Uninstall button

Disable the app

If the app was installed on the phone by the vendor, it’s likely to be non-removable and have no Uninstall button on the App Info screen. That said, it’s not necessarily linked to the OS or core components of the smartphone — it could be, say, a Facebook client or a proprietary browser. Such apps are often called bloatware since they bloat the phone’s firmware and the list of standard apps. The easiest way to disable such apps is on the above-mentioned App Info screen; instead of Uninstall, the relevant button will be marked Disable. A disabled app is not much different from an uninstalled one — it vanishes from the set of icons on the startup screen and won’t run manually or when the phone boots up. Should you need it later, you can easily turn it back on with a single tap on that same App Info screen.

Disabling reduces the risk of data leakage, but does nothing to save storage space — unfortunately, the disabled app continues to take up memory on your phone. If you absolutely have to uninstall it — but there’s no Uninstall button — read on!…

For non-removable apps, instead of an Uninstall button, the App Info screen shows a Disable button

For non-removable apps, instead of an Uninstall button, the App Info screen shows a Disable button

Stop the app

But what if the Disable button on the App Info screen is grayed out and untappable? For especially important programs, vendors take care to block the disabling option — often for a good reason (they’re vital to the system) — so you need to think very carefully before trying to disable or uninstall such apps manually. Open your favorite search engine and punch in the query “exact smartphone model number + exact app name”. Most likely you’ll see Android user forum discussions at the top of the search results. These often give information about whether the given app is safe to disable or whether there could be any side effects.

To perform a harmless experiment with an app that can’t be disabled, you can use the Force Stop button. This is the second button on that App Info screen and it’s almost always active — even for apps that can’t be disabled. Force Stop simply stops the app temporarily, without attempting to remove or permanently disable it. However, it no longer consumes power or mobile data — and can no longer spy on you. And if your phone continues to work as normal, then perhaps the app isn’t that important after all.

But stopped apps can start up again when certain events occur or after a phone restart, and stopping them manually each time — moreover regularly — can be troublesome and inconvenient. Fortunately, you can automate this task with the Greenify app. It doesn’t require superuser rights to work, but merely automates navigating to the now-familiar App Info screen and tapping the Force Stop button. You simply supply Greenify with a list of unwanted apps and set a Force Stop schedule to, say, twice a day. Other tools offer similar functionality, but Greenify’s advantage is its lack of “extra” features.

If the Disable button is inactive, try using Force Stop

If the Disable button is inactive, try using Force Stop

Freeze or uninstall the app despite its objections

If you tested stopping a non-removable app and suffered no negative effects, you might consider freezing it or removing it altogether. Freezing is the same as disabling but is done using different tools. Before delving into the details, note that freezing requires technical skill and the activation of Developer mode on your phone. This mode itself creates certain information security risks by allowing connections to the phone via USB or LAN in special technical modes, plus the ability to view and modify its contents. Although Google has fenced off this functionality with many safeguards (permission requests, additional passwords, and so on), the room for error (thus creating risks) is high.

One more thing: before you start tinkering, be sure to create the fullest possible backup of your smartphone data.

If all of the above hasn’t scared you off, see the guide in the box.

Freezing and uninstalling non-removable Android apps in Developer mode

  • Download and install Android SDK Platform-Tools on your computer. Of the tools inside, you’ll only need the Android Debug Bridge USB driver and the ADB command line.
  • Enable Developer mode on your phone. The details vary slightly from vendor to vendor, but the general recipe is roughly the same: repeatedly tap the Build Number option in the About Phone.
  • Enable USB Debugging under Developer Settings on your smartphone. There are multiple options there — but don’t touch any apart from these two!
  • Connect your smartphone to your computer through USB.
  • Allow Debug mode on your phone screen.
  • Test Debug mode by getting a list of all packages (what developers call apps) installed on your phone. To do so, type the following in the ADB command line
    adb shell pm list packages
    The response will be a long list of packages installed on the phone, in which you need to find the name of the unwanted app. This might look something like facebook.katana or com.samsung.android.bixby.agent. You can often (but not always) tell which app is which by their names.
  • Freeze (disable) the unwanted app using the ADB command line. To do so, enter the command
    adb shell pm disable-user --user 0 PACKAGENAME ,
    where PACKAGENAME is the name of the unwanted app package. Different vendors may have different usernames (0 in our example), so check the correct PM command for your smartphone. As before, an online search helps out: “phone model + Debloat” or “phone model + ADB PM”.
  • You can use developer commands to not only disable an app but also completely uninstall it. To do so, replace the previous command with adb shell pm uninstall --user 0 PACKAGENAME
  • Restart your phone.

The free Universal Android Debloater tool somewhat simplifies all this sorcery. It issues ADB commands automatically, based on the “cleaning packages” selected from the menu, which are prepared with both the vendor and model in mind. But since this is an open-source app written by enthusiasts, we can’t vouch for its efficacy.

]]>
full large medium thumbnail