Cybersecurity – Kaspersky official blog https://www.kaspersky.com/blog The Official Blog from Kaspersky covers information to help protect you against viruses, spyware, hackers, spam & other forms of malware. Wed, 28 Feb 2024 12:15:56 +0000 en-US hourly 1 https://wordpress.org/?v=6.4.3 https://media.kasperskydaily.com/wp-content/uploads/sites/92/2019/06/04074830/cropped-k-favicon-new-150x150.png Cybersecurity – Kaspersky official blog https://www.kaspersky.com/blog 32 32 VoltSchemer: attacks on wireless chargers through the power supply | Kaspersky official blog https://www.kaspersky.com/blog/voltschemer-attack-wireless-chargers/50710/ Wed, 28 Feb 2024 12:15:56 +0000 https://www.kaspersky.com/blog/?p=50710 A group of researchers from the University of Florida has published a study on a type of attack using Qi wireless chargers, which they’ve dubbed VoltSchemer. In the study, they describe in detail how these attacks work, what makes them possible, and what results they’ve achieved.

In this post, first we’ll discuss the researchers’ main findings. Then we’ll explore what it all means practically speaking — and whether you should be concerned about someone roasting your smartphone through a wireless charger.

The main idea behind the VoltSchemer attacks

The Qi standard has become the dominant one in its field: it’s supported by all the latest wireless chargers and smartphones capable of wireless charging. VoltSchemer attacks exploit two fundamental features of the Qi standard.

The first is the way the smartphone and wireless charger exchange information to coordinate the battery charging process: the Qi standard has a communication protocol that uses the only “thing” connecting the charger and the smartphone — a magnetic field — to transmit messages.

The second feature is the way that wireless chargers are intended for anyone to freely use. That is, any smartphone can be placed on any wireless charger without any kind of prior pairing, and the battery will start charging immediately. Thus, the Qi communication protocol involves no encryption — all commands are transmitted in plain text.

It is this lack of encryption that makes communication between charger and smartphone susceptible to man-in-the-middle attacks; that is, said communication can be intercepted and tampered with. That, coupled with the first feature (use of the magnetic field), means such tampering  is not even that hard to accomplish: to send malicious commands, attackers only need to be able to manipulate the magnetic field to mimic Qi-standard signals.

VoltSchemer attack: malicious power adapter

To illustrate the attack, the researchers created a malicious power adapter: an overlay on a regular wall USB socket. Source

And that’s exactly what the researchers did: they built a “malicious” power adapter disguised as a wall USB socket, which allowed them to create precisely tuned voltage noise. They were able to send their own commands to the wireless charger, as well as block Qi messages sent by the smartphone.

Thus, VoltSchemer attacks require no modifications to the wireless charger’s hardware or firmware. All that’s necessary is to place a malicious power source in a location suitable for luring unsuspecting victims.

Next, the researchers explored all the ways potential attackers could exploit this method. That is, they considered various possible attack vectors and tested their feasibility in practice.

VoltSchemer attack: general outline and attack vectors

VoltSchemer attacks don’t require any modifications to the wireless charger itself — a malicious power source is enough. Source

1. Silent commands to Siri and Google Assistant voice assistants

The first thing the researchers tested was the possibility of sending silent voice commands to the built-in voice assistant of the charging smartphone through the wireless charger. They copied this attack vector from their colleagues at Hong Kong Polytechnic University, who dubbed this attack Heartworm.

Heartworm attack: the general idea

The general idea of the Heartworm attack is to send silent commands to the smartphone’s voice assistant using a magnetic field. Source

The idea here is that the smartphone’s microphone converts sound into electrical vibrations. It’s therefore possible to generate these electrical vibrations in the microphone directly using electricity itself rather than actual sound. To prevent this from happening, microphone manufacturers use electromagnetic shielding — Faraday cages. However, there’s a key nuance here: although these shields are good at suppressing the electrical component, they can be penetrated by magnetic fields.

Smartphones that can charge wirelessly are typically equipped with a ferrite screen, which protects against magnetic fields. However, this screen is located right next to the induction coil, and so doesn’t cover the microphone. Thus, today’s smartphone microphones are quite vulnerable to attacks from devices capable of manipulating magnetic fields — such as wireless chargers.

Heartworm attack: lack of protection in today's smartphones

Microphones in today’s smartphones aren’t protected from magnetic field manipulation. Source

The creators of VoltSchemer expanded the already known Heartworm attack with the ability to affect the microphone of a charging smartphone using a “malicious” power source. The authors of the original attack used a specially modified wireless charger for this purpose.

2. Overheating a charging smartphone

Next, the researchers tested whether it’s possible to use the VoltSchemer attack to overheat a smartphone charging on the compromised charger. Normally, when the battery reaches the required charge level or the temperature rises to a threshold value, the smartphone sends a command to stop the charging process.

However, the researchers were able to use VoltSchemer to block these commands. Without receiving the command to stop, the compromised charger continues to supply energy to the smartphone, gradually heating it up — and the smartphone can’t do anything about it. For cases such as this, smartphones have emergency defense mechanisms to avoid overheating: first, the device closes applications, and if that doesn’t help it shuts down completely.

VoltSchemer attack: overheating the charging smartphone

Using the VoltSchemer attack, researchers were able to heat a smartphone on a wireless charger to a temperature of 178°F — approximately 81°C. Source

Thus, the researchers were able to heat a smartphone up to a temperature of 81°C (178°F), which is quite dangerous for the battery — and in certain circumstances could lead to its catching fire (which could of course lead to other things catching fire if the charging phone is left unattended).

3. “Frying” other stuff

Next, the researchers explored the possibility of “frying” various other devices and everyday items. Of course, under normal circumstances, a wireless charger shouldn’t activate unless it receives a command from the smartphone placed on it. However, with the VoltSchemer attack, such a command can be given at any time, as well as a command to not stop charging.

Now, take a guess what will happen to any items lying on the charger at that moment! Nothing good, that’s for sure. For example, the researchers were able to heat a paperclip to a temperature of 280°C (536°F) — enough to set fire to any attached documents. They also managed to fry to death a car key, a USB flash drive, an SSD drive, and RFID chips embedded in bank cards, office passes, travel cards, biometric passports and other such documents.

VoltSchemer attack: frying external objects and devices

Also using the VoltSchemer attack, researchers were able to disable car keys, a USB flash drive, an SSD drive, and several cards with RFID chips, as well as heat a paperclip to a temperature of 536°F — 280°C. Source

In total, the researchers examined nine different models of wireless chargers available in stores, and all of them were vulnerable to VoltSchemer attacks. As you might guess, the models with the highest power pose the greatest danger, as they have the most potential to cause serious damage and overheat smartphones.

Should you fear a VoltSchemer attack in real life?

Protecting against VoltSchemer attacks is fairly straightforward: simply avoid using public wireless chargers and don’t connect your own wireless charger to any suspicious USB ports or power adapters.

While VoltSchemer attacks are quite interesting and can have spectacular results, their real-world practicality is highly questionable. Firstly, such an attack is very difficult to organize. Secondly, it’s not exactly clear what the benefits to an attacker would be — unless they’re a pyromaniac, of course.

But what this research clearly demonstrates is how inherently dangerous wireless chargers can be — especially the more powerful models. So, if you’re not completely sure of the reliability and safety of a particular wireless charger, you’d be wise to avoid using it. While wireless charger hacking is unlikely, the danger of your smartphone randomly getting roasted due to a “rogue” charger that no longer responds to charging commands isn’t entirely absent.

]]>
full large medium thumbnail
How to run language models and other AI tools locally on your computer | Kaspersky official blog https://www.kaspersky.com/blog/how-to-use-ai-locally-and-securely/50576/ Fri, 16 Feb 2024 11:08:41 +0000 https://www.kaspersky.com/blog/?p=50576 Many people are already experimenting with generative neural networks and finding regular use for them, including at work. For example, ChatGPT and its analogs are regularly used by almost 60% of Americans (and not always with permission from management). However, all the data involved in such operations — both user prompts and model responses — are stored on servers of OpenAI, Google, and the rest. For tasks where such information leakage is unacceptable, you don’t need to abandon AI completely — you just need to invest a little effort (and perhaps money) to run the neural network locally on your own computer – even a laptop.

Cloud threats

The most popular AI assistants run on the cloud infrastructure of large companies. It’s efficient and fast, but your data processed by the model may be accessible to both the AI service provider and completely unrelated parties, as happened last year with ChatGPT.

Such incidents present varying levels of threat depending on what these AI assistants are used for. If you’re generating cute illustrations for some fairy tales you’ve written, or asking ChatGPT to create an itinerary for your upcoming weekend city break, it’s unlikely that a leak will lead to serious damage. However, if your conversation with a chatbot contains confidential info — personal data, passwords, or bank card numbers — a possible leak to the cloud is no longer acceptable. Thankfully, it’s relatively easy to prevent by pre-filtering the data — we’ve written a separate post about that.

However, in cases where either all the correspondence is confidential (for example, medical or financial information), or the reliability of pre-filtering is questionable (you need to process large volumes of data that no one will preview and filter), there’s only one solution: move the processing from the cloud to a local computer. Of course, running your own version of ChatGPT or Midjourney offline is unlikely to be successful, but other neural networks working locally provide comparable quality with less computational load.

What hardware do you need to run a neural network?

You’ve probably heard that working with neural networks requires super-powerful graphics cards, but in practice this isn’t always the case. Different AI models, depending on their specifics, may be demanding on such computer components as RAM, video memory, drive, and CPU (here, not only the processing speed is important, but also the processor’s support for certain vector instructions). The ability to load the model depends on the amount of RAM, and the size of the “context window” — that is, the memory of the previous conversation — depends on the amount of video memory. Typically, with a weak graphics card and CPU, generation occurs at a snail’s pace (one to two words per second for text models), so a computer with such a minimal setup is only appropriate for getting acquainted with a particular model and evaluating its basic suitability. For full-fledged everyday use, you’ll need to increase the RAM, upgrade the graphics card, or choose a faster AI model.

As a starting point, you can try working with computers that were considered relatively powerful back in 2017: processors no lower than Core i7 with support for AVX2 instructions, 16GB of RAM, and graphics cards with at least 4GB of memory. For Mac enthusiasts, models running on the Apple M1 chip and above will do, while the memory requirements are the same.

When choosing an AI model, you should first familiarize yourself with its system requirements. A search query like “model_name requirements” will help you assess whether it’s worth downloading this model given your available hardware. There are detailed studies available on the impact of memory size, CPU, and GPU on the performance of different models; for example, this one.

Good news for those who don’t have access to powerful hardware — there are simplified AI models that can perform practical tasks even on old hardware. Even if your graphics card is very basic and weak, it’s possible to run models and launch environments using only the CPU. Depending on your tasks, these can even work acceptably well.

GPU throughput tests

Examples of how various computer builds work with popular language models

Choosing an AI model and the magic of quantization

A wide range of language models are available today, but many of them have limited practical applications. Nevertheless, there are easy-to-use and publicly available AI tools that are well-suited for specific tasks, be they generating text (for example, Mistral 7B), or creating code snippets (for example, Code Llama 13B). Therefore, when selecting a model, narrow down the choice to a few suitable candidates, and then make sure that your computer has the necessary resources to run them.

In any neural network, most of the memory strain is courtesy of weights — numerical coefficients describing the operation of each neuron in the network. Initially, when training the model, the weights are computed and stored as high-precision fractional numbers. However, it turns out that rounding the weights in the trained model allows the AI tool to be run on regular computers while only slightly decreasing the performance. This rounding process is called quantization, and with its help the model’s size can be reduced considerably — instead of 16 bits, each weight might use eight, four, or even two bits.

According to current research, a larger model with more parameters and quantization can sometimes give better results than a model with precise weight storage but fewer parameters.

Armed with this knowledge, you’re now ready to explore the treasure trove of open-source language models, namely the top Open LLM leaderboard. In this list, AI tools are sorted by several generation quality metrics, and filters make it easy to exclude models that are too large, too small, or too accurate.

List of language models sorted by filter set

List of language models sorted by filter set

After reading the model description and making sure it’s potentially a fit for your needs, test its performance in the cloud using Hugging Face or Google Colab services. This way, you can avoid downloading models which produce unsatisfactory results, saving you time. Once you’re satisfied with the initial test of the model, it’s time to see how it works locally!

Required software

Most of the open-source models are published on Hugging Face, but simply downloading them to your computer isn’t enough. To run them, you have to install specialized software, such as LLaMA.cpp, or — even easier — its “wrapper”, LM Studio. The latter allows you to select your desired model directly from the application, download it, and run it in a dialog box.

Another “out-of-the-box” way to use a chatbot locally is GPT4All. Here, the choice is limited to about a dozen language models, but most of them will run even on a computer with just 8GB of memory and a basic graphics card.

If generation is too slow, then you may need a model with coarser quantization (two bits instead of four). If generation is interrupted or execution errors occur, the problem is often insufficient memory — it’s worth looking for a model with fewer parameters or, again, with coarser quantization.

Many models on Hugging Face have already been quantized to varying degrees of precision, but if no one has quantized the model you want with the desired precision, you can do it yourself using GPTQ.

This week, another promising tool was released to public beta: Chat With RTX from NVIDIA. The manufacturer of the most sought-after AI chips has released a local chatbot capable of summarizing the content of YouTube videos, processing sets of documents, and much more — provided the user has a Windows PC with 16GB of memory and an NVIDIA RTX 30th or 40th series graphics card with 8GB or more of video memory. “Under the hood” are the same varieties of Mistral and Llama 2 from Hugging Face. Of course, powerful graphics cards can improve generation performance, but according to the feedback from the first testers, the existing beta is quite cumbersome (about 40GB) and difficult to install. However, NVIDIA’s Chat With RTX could become a very useful local AI assistant in the future.

The code for the game "Snake", written by the quantized language model TheBloke/CodeLlama-7B-Instruct-GGUF

The code for the game “Snake”, written by the quantized language model TheBloke/CodeLlama-7B-Instruct-GGUF

The applications listed above perform all computations locally, don’t send data to servers, and can run offline so you can safely share confidential information with them. However, to fully protect yourself against leaks, you need to ensure not only the security of the language model but also that of your computer – and that’s where our comprehensive security solution comes in. As confirmed in independent tests, Kaspersky Premium has practically no impact on your computer’s performance — an important advantage when working with local AI models.

]]>
full large medium thumbnail
Secure AI usage both at home and at work | Kaspersky official blog https://www.kaspersky.com/blog/how-to-use-chatgpt-ai-assistants-securely-2024/50562/ Wed, 14 Feb 2024 11:44:17 +0000 https://www.kaspersky.com/blog/?p=50562 Last year’s explosive growth in AI applications, services, and plug-ins looks set to only accelerate. From office applications and image editors to integrated development environments (IDEs) such as Visual Studio — AI is being added to familiar and long-used tools. Plenty of developers are creating thousands of new apps that tap the largest AI models. However, no one in this race has yet been able to solve the inherent security issues, first and foremost the minimizing of confidential data leaks, and also the level of account/device hacking through various AI tools — let alone create proper safeguards against a futuristic “evil AI”. Until someone comes up with an off-the-shelf solution for protecting the users of AI assistants, you’ll have to pick up a few skills and help yourself.

So, how do you use AI without regretting it later?

Filter important data

The privacy policy of OpenAI, the developer of ChatGPT, unequivocally states that any dialogs with the chatbot are saved and can be used for a number of purposes. First, these are solving technical issues and preventing terms-of-service violations: in case someone gets an idea to generate inappropriate content. Who would have thought it, right? In that case, chats may even be reviewed by a human. Second, the data may be used for training new GPT versions and making other product “improvements”.

Most other popular language models — be it Google’s Gemini, Anthropic’s Claude, or Microsoft’s Bing and Copilot — have similar policies: they can all save dialogs in their entirety.

That said, inadvertent chat leaks have already occurred due to software bugs, with users seeing other people’s conversations instead of their own. The use of this data for training could also lead to a data leak from a pre-trained model: the AI assistant might give your information to someone if it believes it to be relevant for the response. Information security experts have even designed multiple attacks (one, two, three) aimed at stealing dialogs, and they’re unlikely to stop there.

So, remember: anything you write to a chatbot can be used against you. We recommend taking precautions when talking to AI.

Don’t send any personal data to a chatbot. No passwords, passport or bank card numbers, addresses, telephone numbers, names, or other personal data that belongs to you, your company, or your customers must end up in chats with an AI. You can replace these with asterisks or “REDACTED” in your request.

Don’t upload any documents. Numerous plug-ins and add-ons let you use chatbots for document processing. There might be a strong temptation to upload a work document to, say, get an executive summary. However, by carelessly uploading of a multi-page document, you risk leaking confidential data, intellectual property, or a commercial secret such as the release date of a new product or the entire team’s payroll. Or, worse than that, when processing documents received from external sources, you might be targeted with an attack that counts on the document being scanned by a language model.

Use privacy settings. Carefully review your large-language-model (LLM) vendor’s privacy policy and available settings: these can normally be leveraged to minimize tracking. For example, OpenAI products let you disable saving of chat history. In that case, data will be removed after 30 days and never used for training. Those who use API, third-party apps, or services to access OpenAI solutions have that setting enabled by default.

Sending code? Clean up any confidential data. This tip goes out to those software engineers who use AI assistants for reviewing and improving their code: remove any API keys, server addresses, or any other information that could give away the structure of the application or the server configuration.

Limit the use of third-party applications and plug-ins

Follow the above tips every time — no matter what popular AI assistant you’re using. However, even this may not be sufficient to ensure privacy. The use of ChatGPT plug-ins, Gemini extensions, or separate add-on applications gives rise to new types of threats.

First, your chat history may now be stored not only on Google or OpenAI servers but also on servers belonging to the third party that supports the plug-in or add-on, as well as in unlikely corners of your computer or smartphone.

Second, most plug-ins draw information from external sources: web searches, your Gmail inbox, or personal notes from services such as Notion, Jupyter, or Evernote. As a result, any of your data from those services may also end up on the servers where the plug-in or the language model itself is running. An integration like that may carry significant risks: for example, consider this attack that creates new GitHub repositories on behalf of the user.

Third, the publication and verification of plug-ins for AI assistants are currently a much less orderly process than, say, app-screening in the App Store or Google Play. Therefore, your chances of encountering a poorly working, badly written, buggy, or even plain malicious plug-in are fairly high — all the more so because it seems no one really checks the creators or their contacts.

How do you mitigate these risks? Our key tip here is to give it some time. The plug-in ecosystem is too young, the publication and support processes aren’t smooth enough, and the creators themselves don’t always take care to design plug-ins properly or comply with information security requirements. This whole ecosystem needs more time to mature and become securer and more reliable.

Besides, the value that many plug-ins and add-ons add to the stock ChatGPT version is minimal: minor UI tweaks and “system prompt” templates that customize the assistant for a specific task (“Act as a high-school physics teacher…”). These wrappers certainly aren’t worth trusting with your data, as you can accomplish the task just fine without them.

If you do need certain plug-in features right here and now, try to take maximum precautions available before using them.

  • Choose extensions and add-ons that have been around for at least several months and are being updated regularly.
  • Consider only plug-ins that have lots of downloads, and carefully read the reviews for any issues.
  • If the plug-in comes with a privacy policy, read it carefully before you start using the extension.
  • Opt for open-source tools.
  • If you possess even rudimentary coding skills — or coder friends — skim the code to make sure that it only sends data to declared servers and, ideally, AI model servers only.

Execution plug-ins call for special monitoring

So far, we’ve been discussing risks relating to data leaks; but this isn’t the only potential issue when using AI. Many plug-ins are capable of performing specific actions at the user’s command — such as ordering airline tickets. These tools provide malicious actors with a new attack vector: the victim is presented with a document, web page, video, or even an image that contains concealed instructions for the language model in addition to the main content. If the victim feeds the document or link to a chatbot, the latter will execute the malicious instructions — for example, by buying tickets with the victim’s money. This type of attack is referred to as prompt injection, and although the developers of various LLMs are trying to develop a safeguard against this threat, no one has managed it — and perhaps never will.

Luckily, most significant actions — especially those involving payment transactions such as purchasing tickets — require a double confirmation. However, interactions between language models and plug-ins create an attack surface so large that it’s difficult to guarantee consistent results from these measures.

Therefore, you need to be really thorough when selecting AI tools, and also make sure that they only receive trusted data for processing.

]]>
full large medium thumbnail
Crypto wallet drainer: what it is and how to defend against it | Kaspersky official blog https://www.kaspersky.com/blog/what-is-a-crypto-wallet-drainer/50490/ Tue, 06 Feb 2024 15:36:03 +0000 https://www.kaspersky.com/blog/?p=50490 A new category of malicious tools has been gaining popularity with crypto scammers lately: crypto wallet drainers. This post will explain what crypto drainers are, how they work, what makes them dangerous — even for experienced users — and how to defend against them.

What a crypto (wallet) drainer is

A crypto drainer — or crypto wallet drainer — is a type of malware that’s been targeting crypto owners since it first appeared just over a year ago. A crypto drainer is designed to (quickly) empty crypto wallets automatically by siphoning off either all or just the most valuable assets they contain, and placing them into the drainer operators’ wallets.

As an example of this kind of theft, let us review the theft of 14 Bored Ape NFTs with a total value of over $1 million, which occurred on December 17, 2022. The scammers set up a fake website for the real Los Angeles-based movie studio Forte Pictures, and contacted a certain NFT collector on behalf of the company. They told the collector that they were making a film about NFT. Next, they asked the collector if they wanted to license the intellectual property (IP) rights to one of their Bored Ape NFTs so it could be used in the movie.

According to the scammers, this required signing a contract on “Unemployd”, ostensibly a blockchain platform for licensing NFT-related intellectual property. However, after the victim approved the transaction, it turned out that all 14 Bored Ape NFTs belonging to them were sent to the malicious actor for a paltry 0.00000001 ETH (about US¢0.001 at the time).

The scam crypto transaction

What the request to sign the “contract” looked like (left), and what actually happened after the transaction was approved (right). Source

The scheme relied to a large extent on social engineering: the scammers courted the victim for more than a month with email messages, calls, fake legal documents, and so on. However, the centerpiece of this theft was the transaction that transferred the crypto assets into the scammers’ ownership, which they undertook at an opportune time. Such a transaction is what drainers rely on.

How crypto drainers work

Today’s drainers can automate most of the work of emptying victims’ crypto wallets. First, they can help to find out the approximate value of crypto assets in a wallet and identify the most valuable ones. Second, they can create transactions and smart contracts to siphon off assets quickly and efficiently. And finally, they obfuscate fraudulent transactions, making them as vague as possible, so that it’s difficult to understand what exactly happens once the transaction is authorized.

Armed with a drainer, malicious actors create fake web pages posing as websites for cryptocurrency projects of some sort. They often register lookalike domain names, taking advantage of the fact that these projects tend to use currently popular domain extensions that resemble one another.

Then the scammers use a technique to lure the victim to these sites. Frequent pretexts are an airdrop or NFT minting: these models of rewarding user activity are popular in the crypto world, and scammers don’t hesitate to take advantage of that.

These X (Twitter) ads promoted NFT airdrops and new token launches on sites that contain the drainer

These X (Twitter) ads promoted NFT airdrops and new token launches on sites that contain the drainer. Source

Also commonplace are some totally unlikely schemes: to draw users to a fake website, malicious actors recently used a hacked Twitter account that belonged to a… blockchain security company!

X (Twitter) ads for a supposedly limited-edition NFT collection on scam websites

X (Twitter) ads for a supposedly limited-edition NFT collection on scam websites. Source

Scammers have also been known to place ads on social media and search engines to lure victims to their forged websites. In the latter case, it helps them intercept customers of real crypto projects as they search for a link to a website they’re interested in. Without looking too closely, users click on the “sponsored” scam link, which is always displayed above organic search results, and end up on the fake website.

Scam sites containing crypto drainers in Google ads

Google search ads with links to scam websites containing crypto drainers. Source

Then, the unsuspecting crypto owners are handed a transaction generated by the crypto drainer to sign. This can result in a direct transfer of funds to the scammers’ wallets, or more sophisticated scenarios such as transferring the rights to manage assets in the victim’s wallet to a smart contract. One way or another, once the malicious transaction is approved, all the valuable assets get siphoned off to the scammers’ wallets as quickly as possible.

How dangerous crypto drainers are

The popularity of drainers among crypto scammers is growing rapidly. According to a recent study on crypto drainer scams, more than 320,000 users were affected in 2023, with total damage of just under $300 million. The fraudulent transactions recorded by the researchers included around a dozen — worth more than a million dollars each. The largest value of loot taken in a single transaction amounted to a little over $24 million!

Curiously, experienced cryptocurrency users fall prey to scams like this just like newbies. For example, the founder of the startup behind Nest Wallet was recently robbed of $125,000 worth of stETH by scammers who used a fake website promising an airdrop.

How to protect against crypto drainers

  • Don’t put all your eggs in one basket: try to keep only a portion of your funds that you need for day-to-day management of your projects in hot crypto wallets, and store the bulk of your crypto assets in cold wallets.
  • To be on the safe side, use multiple hot wallets: use one for your Web3 activities — such as drop hunting, use another to keep operating funds for these activities, and transfer your profits to cold wallets. You’ll have to pay extra commission for transfers between the wallets, but malicious actors would hardly be able to steal anything from the empty wallet used for airdrops.
  • Keep checking the websites you visit time and time again. Any suspicious detail is a reason to stop and double-check it all again.
  • Don’t click on sponsored links in search results: only use links in organic search results – that is, those that aren’t marked “sponsored”.
  • Review every transaction detail carefully.
  • Use companion browser extensions to verify transactions. These help identify fraudulent transactions and highlight what exactly will happen as a result of the transaction.
  • Finally, be sure to install reliable security on all devices you use to manage crypto assets.
Protection from crypto threats in Kaspersky solutions

How protection from crypto threats works in Kaspersky solutions

By the way, Kaspersky solutions offer multi-layered protection against crypto threats. Be sure to use comprehensive security on all your devices: phones, tablets, and computers. Kaspersky Premium is a good cross-platform solution. Check that all basic and advanced security features are enabled and read our detailed instructions on protecting both hot and cold crypto wallets.

]]>
full large medium thumbnail
The cybersecurity threats to kids that parents should be aware of in 2024 | Kaspersky official blog https://www.kaspersky.com/blog/cybersecurity-threats-for-kids-2024/50188/ Wed, 17 Jan 2024 08:00:43 +0000 https://www.kaspersky.com/blog/?p=50188 In the era of modern technology, the age at which children are introduced to the digital world and technology is increasingly lower. This digital experience, however, can be marred by potential risks lurking online. As technology continues to advance, the tactics and strategies used by cybercriminals to target and exploit young internet users are also evolving.

Therefore, it’s crucial for parents to stay informed about the latest cybersecurity threats targeting kids to better protect them from potential harm. In this post, my colleague, Anna Larkina, and I explore some of the key cybersecurity trends that parents should be aware of and provide tips on how to safeguard their children’s online activities.

AI threats

AI is continuing to revolutionize various industries, and its daily use ranges from chatbots and AI wearables to personalized online shopping recommendations — among many other common uses. And of course, such global trends pique the interest and curiosity of children, who can use AI tools to do their homework or simply chat with AI-enabled chatbots. According to a UN study, about 80 percent of youths that took part in its survey claimed that they interact with AI multiple times a day. However, AI applications can pose numerous risks to young users involving data privacy loss, cyberthreats, and inappropriate content.

With the development of AI, numerous little-known applications have emerged with seemingly harmless features such as uploading a photo to receive a modified version — whether it be an anime-style image or simple retouching. However, when adults, let alone children, upload their images to such applications, they never know in which databases their photos will ultimately remain, or whether they’ll be used further. Even if your child decides to play with such an application, it’s essential to use them extremely cautiously and ensure there’s no personal information that may identify the child’s identity — such as names, combined with addresses, or similar sensitive data — in the background of the photo, or consider avoiding using such applications altogether.

Moreover, AI apps – chatbots in particular – can easily provide age-inappropriate content when prompted. This poses a heightened risk as teenagers might feel more comfortable sharing personal information with the chatbot than with their real-life acquaintances, as evidenced by instances where the chatbot gave advice on masking the smell of alcohol and pot to a user claiming to be 15. On an even more inappropriate level, there are a multitude of AI chatbots that are specifically designed to provide an “erotic” experience. Although some require a form of age verification, this is a dangerous trend as some children might opt to lie about their age, while checks of real age are lacking.

It is estimated that on Facebook Messenger alone, there are over 300,000 chatbots in operation. However, not all of them are safe, and may carry various risks, like the ones mentioned earlier. Therefore, it is extremely important to discuss with children the importance of privacy and the dangers of oversharing, as well as talking to them about their online experiences regularly. It also reiterates the significance of establishing trusting relationships with one’s children. This will ensure that they feel comfortable asking their parents for advice rather than turning to a chatbot for guidance.

Young gamers under attack

According to statistics, 91 percent of children in the UK aged 3-15 play digital games on devices. The vast world of gaming is open to them — also making them vulnerable to cybercriminals’ attacks. For instance, in 2022, our security solutions detected more than seven million attacks relating to popular children’s games, resulting in a 57 percent increase in attempted attacks compared to the previous year. The top children’s games by the number of users targeted even included games for the youngest children — Poppy Playtime and Toca Life World — which are designed for children 3-8-years old.

What raises even more concerns is that sometimes children prefer to communicate with strangers on gaming platforms rather than on social media. In some games, unmoderated voice and text chats form a significant part of the experience. As more young people come online, criminals can build trust virtually, in the same way as they would entice someone in person — by offering gifts or promises of friendship. Once they lure a young victim by gaining their trust, cybercriminals can obtain their personal information, suggesting they click on a phishing link, download a malicious file onto their device disguised as a game mod for Minecraft or Fortnite, or even groom them for more nefarious purposes. This can be seen in the documentary series “hacker:HUNTER“, co-produced by Kaspersky, as one of the episodes revealed how cybercriminals identify skilled children through online games and then groom them to carry out hacking tasks.

The number of ways to interact within the gaming world is increasing, and now includes voice chats as well as AR and VR games. Both cybersecurity and social-related threats remain particular problems in children’s gaming. Parents must remain vigilant regarding their children’s behavior and maintain open communication to address any potential threats. Identifying a threat involves observing changes, such as sudden shifts in gaming habits that may indicate a cause for concern. To keep your child safe by stopping them downloading malicious files during their gaming experience, we advise installing a trusted security solution on all their devices.

Fintech for kids: the phantom menace

An increasing number of banks are providing specialized products and services designed for children — including bank cards for kids as young as 12. This gives parents helpful things like the ability to monitor their child’s expenditures, establish daily spending limits, or remotely transfer funds for the child’s pocket money.

Yet, by introducing banking cards for children, the latter can become susceptible to financially motivated threat actors and vulnerable to conventional scams, such as promises of a free PlayStation 5 and other similar valuable devices after entering card details on a phishing site. Using social engineering techniques, cybercriminals might exploit children’s trust by posing as their peers and requesting card details or money transfers to their accounts.

As the fintech industry for children continues to evolve, it’s crucial to educate children not only about financial literacy but also the basics of cybersecurity. To achieve this, you can read Kaspersky Cybersecurity Alphabet together with your child. It’s specifically designed to explain key online safety rules in a language easily comprehensible for children.

To avoid concerns about a child losing their card or sharing banking details, we recommend installing a digital NFC card on their phone instead of giving them a physical plastic card. Establish transaction confirmation with the parent if the bank allows it. And, of course, the use of any technical solutions must be accompanied by an explanation of how to use them safely.

Smart home threats for kids

In our interconnected world, an increasing number of devices — even everyday items like pet feeders — are becoming “smart” by connecting to the internet. However, as these devices become more sophisticated, they also become more susceptible to cyberattacks. This year, our researchers conducted a vulnerability study on a popular model of smart pet feeder. The findings revealed a number of serious security issues that could allow attackers to gain unauthorized access to the device and steal sensitive information such as video footage — potentially turning the feeder into a surveillance tool.

Despite the increasing number of threats, manufacturers are not rushing to create cyber-immune devices that preemptively prevent potential exploits of vulnerabilities. Meanwhile, the variety of different IoT devices purchased in households continues to grow. These devices are becoming the norm for children, which also means that children can become tools for cybercriminals in an attack. For instance, if a smart device becomes a fully functional surveillance tool and a child is home alone, cybercriminals could contact them through the device and request sensitive information such as their name, address, or even their parents’ credit card number and times when their parents are not at home. In a scenario such as this one, beyond just hacking the device, there is a risk of financial data loss or even a physical attack.

As we cannot restrict children from using smart home devices, our responsibility as parents is to maximize the security of these devices. This includes at least adjusting default security settings, setting new passwords, and explaining basic cybersecurity rules to children who use IoT devices.

I need my space!

As kids mature, they develop greater self-awareness, encompassing an understanding of their personal space, privacy, and sensitive data, both offline and in their online activities. The increasing accessibility of the internet means more children are prone to becoming aware of this. Consequently, when a parent firmly communicates the intent to install a parenting digital app on their child’s devices, not all children will take it calmly.

This is why parents now require the skill to discuss their child’s online experience and the importance of parenting digital apps for online safety while respecting the child’s personal space. This involves establishing clear boundaries and expectations, discussing the reasons for using the app with the child. Regular check-ins are also vital, and adjustments to the restrictions should be made as the child matures and develops a sense of responsibility. Learn more in our guide on kids’ first gadgets, where, together with experienced child psychologist Saliha Afridi, our privacy experts analyze a series of important milestones to understand how to introduce such apps into a child’s life properly and establish a meaningful dialogue about cybersecurity online.

Forbidden fruit can be… malicious

If an app is unavailable in one’s home region, a child may start looking for an alternative, but this alternative is often only a malicious copy. Even if they turn to official app stores like Google Play, they still run the risk of falling prey to cybercriminals. From 2020 to 2022, our researchers found more than 190 apps infected with the Harly Trojan on Google Play, which signed users up for paid services without their knowledge. A conservative estimate of the number of downloads of these apps is 4.8 million, while the actual figure of victims may be even higher.

Children are not the only ones following this trend; adults are as well, which was highlighted in our latest consumer cyberthreats predictions report as a part of the annual Kaspersky Security Bulletin. That’s why it’s crucial for kids and their parents to understand the fundamentals of cybersecurity. For instance, it’s important to pay attention to the permissions that an app requests when installing it: a simple calculator, for instance, shouldn’t need access to your location or contact list.

How to keep kids safe?

As we can see, many of the trends that are playing out in society today are also affecting children, making them potential targets for attackers. This includes both the development and popularity of AI and smart homes, as well as the expansion of the world of gaming and the fintech industry. Our experts are convinced that protecting children from cybersecurity threats in 2024 requires proactive measures from parents:

  • By staying informed about the latest threats and actively monitoring their children’s online activities, parents can create a safer online environment for their kids.
  • It’s crucial for parents to have open communication with their children about the potential risks they may encounter online and to enforce strict guidelines to ensure their safety.
  • With the right tools such as Kaspersky Safe Kids, parents can effectively safeguard their children against cyberthreats in the digital age.
  • To help parents introduce their children to cybersecurity amid the evolving threat landscape, our experts have developed the above-mentioned Kaspersky Cybersecurity Alphabet, with key concepts from the cybersecurity industry. In this book, your child can get to know about new technologies, learn the main cyber hygiene rules, find out how to avoid online threats, and recognize fraudsters’ tricks. After reading this book together, you’ll be sure that your offspring knows how to distinguish a phishing website, how VPN and QR-codes work, and even what encryption and honeypots are and what role they play in modern cybersecurity. You can download the pdf version of the book and also the Kaspersky Cybersecurity Alphabet poster for free.
]]>
full large medium thumbnail
Malware, fake specs, and other problems with cheap Android devices | Kaspersky official blog https://www.kaspersky.com/blog/how-to-avoid-threats-from-budget-android-devices/49565/ Tue, 07 Nov 2023 08:42:20 +0000 https://www.kaspersky.com/blog/?p=49565 The temptation to save money when buying expensive devices is, well, tempting — gadgets from little-known brands can offer the same spec at a fraction of the price of more popular makes, while having an Android set-top box or Android TV can cut costs on a range of subscriptions.

Unfortunately, cheap devices — much like a free lunch — often come with a catch, so it’s important to do your research before buying.

Malicious surprise

The most unwanted “gift” sometimes found in cheap, no-name Android devices is pre-installed malware. It’s not entirely clear whether bad actors install it directly at the factory, whether it happens on the way to the store, or whether manufacturers carelessly use trojanized third-party firmware, but as soon as you open the box and activate the new device, the malware springs into action. This type of infection is extremely dangerous.

  • The Trojan is difficult to detect and almost impossible to remove. It’s integrated right in the device’s firmware and has system privileges. Special know-how and software are needed to find and remove it, but even then there’s no guarantee that the malware will be gone for good and won’t reactivate.
  • Attackers have full access to the device and data. Without needing either permissions or requests, they can steal information, intercept authentication codes, install additional programs, and so on.

Cybercriminals make money from such pre-infected devices in various ways, all of which cause harm to the buyer.

  • Ad fraud. The device displays ads — often stealthily in an invisible window. As part of the fraud, additional software may be installed on the device, which simulates the actions of a user interested in a particular ad. For the device owner, this results in slow operation and clutters the memory of their new smartphone or set-top box.
  • Data theft and account hijacking. Cybercriminals have no problem intercepting passwords, messages, bank card numbers, authentication codes, geolocation data, or any other useful information passing through the infected device. Some of this is used for “marketing” (that is, targeted advertising), and some is used for other fraudulent schemes.
  • Running proxies. Cybercriminals can enable a proxy server on the infected device, through which outsiders can access the internet pretending to be the victim, and hiding their tracks and real IP addresses. As a result, the device owner can suffer serious internet slowdown, end up on various denylists, and even attract the attention of law enforcement agencies.
  • Creating online accounts, such as on WhatsApp or Gmail. These accounts are then used for spamming, and the device owner may face anti-spam restrictions and blocks imposed by these services on the device or the entire home network.

Alas, the above scenarios are in no way rare. In the most recent case this year, around 200 models of Android devices were found infected with the Badbox fraud scheme. These were mostly cheap TV set-top boxes under various brands sold online or in electronics hypermarkets, but there were also tablets and smartphones, including gadgets purchased for schools. Experts detected the Triada Trojan on all of them. This Android malware was first discovered by Kaspersky analysts back in 2016, and even then it was described as one of the most sophisticated on the Android platform. It goes without saying that its developers have not been sitting on their hands all these years. Badbox uses infected devices for ad fraud and running proxies.

Last year, the Lemon Group was found to be engaged in ad fraud — 50 different brands of Android devices were infected with the Guerrilla Trojan. In 2019, Google highlighted a similar case, but without mentioning specific manufacturers or the number of infected device models involved. Meanwhile, the largest incident of this kind occurred in 2016 and affected up to 700 million smartphones, which were used for data theft and ad fraud.

Interesting fact: Trojan functionality even managed to get inside dumb phones. Threat actors “trained” them to send texts on command from a central server (for example, to subscribe to paid services) and to forward incoming texts to their own servers, which made it possible to use the numbers of push-button phones to register for services that require confirmation by text.

Fake specs

The second problem with cheap Android devices from unknown manufacturers is the discrepancy between the stated specification and the actual “filling”. Sometimes this arises due to a hardware design error. For example, a high-speed Wi-Fi adapter may be connected to a slow USB 2.0 bus making the declared data transfer speed physically unattainable; or, due to a firmware bug, the promised HDR video mode doesn’t work.

And sometimes it’s a case of an obvious fake, such as when a device promising 4GB of RAM and 4K resolution in reality works with only 2GB and offers not even HD but 720p image quality.

Support issues and security threats

Even if a third-tier Android device is not infected with malware out of the box, the security risks are greater than for well-known brands. Android always needs updating, and Google fixes vulnerabilities and releases patches every month, but these apply only to pure Android (AOSP) and Google Pixel devices. For all other versions of the operating system, updates are the responsibility of the manufacturer of the specific device, and many are slow to update the firmware — if at all. Therefore, even on a new gadget you might find the outdated Android 10, and after just a couple of years of use all the software installed on it will belong in a museum.

How to combine economy and security

We’re not advising users to buy only expensive gadgets — not everyone wants or can do this. But when opting for a budget device, it pays to take extra precautions:

  • Choose brands that have been around for a while and are sold actively in many countries — even if they’re not so well-known.
  • If you’ve never heard of a particular manufacturer, don’t spend your time online reading about a specific model of set-top box, TV, or phone — but about the company itself.
  • Study the company’s website and check that the support section has contact details, service information, and — most importantly — firmware updates with download instructions.
  • Read buyer reviews on specialized forums — not on marketplaces or store websites. Pay special attention to the correlation between the stated and real specification, availability of updates, and odd or suspicious device behavior.
  • If you have an opportunity to see the device live in action in a store, do so. There, go to the settings and see if there’s an option to install updates. And also check how old the installed Android is. Anything below version 12 can be considered outdated.
  • Compare the price of the device you fancy with well-known Chinese brands such as Huawei or Xiaomi. Lesser-known but high-quality devices with similar specs might be as little as half the price of “renowned” Chinese brands — but a severalfold difference is suspicious.
  • As soon as you buy the device, familiarize yourself with its settings, update the firmware to the latest version, then uninstall or disable through the settings all apps that seem surplus to requirements.
  • For devices that allow app installs, install full Android protection immediately after purchase and activation.
]]>
full large medium thumbnail
What's new in Kaspersky Safe Kids in 2023 | Kaspersky official blog https://www.kaspersky.com/blog/safe-kids-2023-updated-features/47957/ Wed, 19 Apr 2023 14:09:13 +0000 https://www.kaspersky.com/blog/?p=47957 Around 50% of children spend a whopping three to five hours a day using their gadgets. And no matter which strategy parents adopt — limit, recommend or simply observe — they cannot ignore the way their offspring use gadgets, or the impact those gadgets have on their children’s health, mental or physical development, or wellbeing. We have long offered Kaspersky Safe Kids to help parents with this. The application gives parents just what they need: knowing where their children are, and keeping track of screen time and the content their children are interested in; it also provides flexible tools for channeling children’s energy in the right direction.

Parenting is becoming harder with every new year: the number of gadgets kids use is increasing, while younger brothers and sisters seem to catch up with astonishing speed — leading to their getting devices of their own. And Kaspersky Safe Kids needs to keep up with the times — to keep on developing and improving; which it does! An updated version of Kaspersky Safe Kids for both Android and iOS is now available to all users, and it makes parents’ busy lives noticeably easier.

Keeping your kids where you can “see” them

The Kaspersky Safe Kids main screen on the parent’s phone provides instant visibility of kids’ current status.

Parents’ most frequent concerns are their kids’ safety and location, so the main screen puts a map front and center. This displays all kids’ devices and their battery levels so you can tell at a glance where your offspring are, or if it’s about time you called them to tell them to go find a charger.

Settings for your kids and their devices are arranged as a strip with two rows: you select the child’s name above and the device, below.

Updated Kaspersky Safe Kids main screen.

Updated Kaspersky Safe Kids main screen.

The main screen is composed of widgets, which you can move around to suit your digital parenting priorities. If you’re mainly concerned about what a child is doing most of all on their phone, you can raise the screen time counter to the top and track the apps most time is spent on.

How does your child use their screen time?

How does your child use their screen time?

If your priority is keeping age-restricted content away, a tap of a button right on the main screen will display the My Kaspersky website with detailed reports on websites visited and videos watched.

These are some of the customizable widgets:

  • child’s requests for app launches and website visits
  • total screen time
  • detailed activity graph showing when various gadgets (phones, computers, and tablets) were used
  • favorite apps and time spent in these
  • web-search history
  • website-browsing history
  • YouTube search and watch history
  • remote device block with one tap (except calls and allowed apps)

Tapping on a widget opens settings or detailed reports.

New tips

According to many parents, tried-and-tested advice is their biggest helper: tips from pundits, teachers and psychologists, and lifehacks from other parents. Kaspersky Safe Kids has long provided expert recommendations on how to talk to children about cutting down on screen time, obnoxious content on the web, and so on. The tips now cover a broader range of topics and use vivid icons on the home screen below the map (Instagram stories, anyone?), so you are sure to notice them.

Updated Kaspersky Safe Kids options

Updated Kaspersky Safe Kids options.

Being honest with your kids

We are unwavering supporters of honest conversations between parents and their kids when it comes to setting limits — a subject that hardly any children have much enthusiasm for. The kids’ version of the app used to have only an administrative interface, which parents alone could log in to. The latest version of Kaspersky Safe Kids updates kids’ main screen experience: the child can now check how much screen time is left for the day, and how parents responded to their requests to visit a website or open an app. We’ve made the main screen look a bit playful too — something that kids between the ages of six and twelve tend to like.

Updated main screen for children in Kaspersky Safe Kids

Updated main screen for children in Kaspersky Safe Kids.

In general, the updated mobile versions for both iOS and Android have become much more convenient while retaining the most important thing — reliable protection — thanks to which Kaspersky Safe Kids has received the Approved Parental Control Software certificate from independent test laboratory AV-TEST for seven years in a row, blocking almost 100% of inappropriate content.

By the way, instead of just blocking every website or type of content, we recommend that you talk to your child about building healthy digital habits. Let’s be honest — those are something we as adults could use too.

PS: Kaspersky Safe Kids is now included for free for a full year’s use in our new Premium subscription!

]]>
full large medium thumbnail
Kaspersky VPN wins AV-TEST’s performance test | Kaspersky official blog https://www.kaspersky.com/blog/ksc-wins-av-test-2022/46586/ Wed, 14 Dec 2022 14:06:43 +0000 https://www.kaspersky.com/blog/?p=46586 For the third year in a row, Kaspersky VPN Secure Connection has participated in the public VPN package certification tests conducted by AV-TEST, the independent research institute for IT security based in Magdeburg, Germany, and once again for the third year in a row it has received the “Approved Virtual Private Network Solution” badge.

Curiously, there has been a constant fall in the number of participants whose solutions reached the end of the tests and were certified. Specifically: in 2020 there were six VPN packages, in 2021 – three, and finally, this year there were only two left: Kaspersky VPN Secure Connection and Norton Secure VPN (bringing to mind the “Highlander” movie tagline: “There can only be one”). The reason is that participants in a public certification test may choose not to have their results published if either they’re not satisfied with them or their products don’t meet the certification requirements.

This certification test evaluates many factors that affect usage of a VPN: usability, OS compatibility, server locations, upload and download speeds, security, transparency, etc. We were confident in both the stability and security level provided by our VPN solution, so were most interested in the additional performance comparisons with the other VPN products available on the market. Having that in mind, and before the public certification test was started, we asked AV-TEST to test an additional six VPN solutions fully in parallel with the public certification performance test and in full accordance with its methodology. As a result, an extended performance comparative report was published.

The participating VPNs were tested for both their download and upload performance, torrent download performance, YouTube streaming, and measured latency at three geographic locations – the U.S. West Coast, the Netherlands, and Japan. All tests were conducted for the “best” local connection as well as two geographic overseas connections. With the public certification test, there was only one difference: the results achieved by any of the tested products were not excluded.

The Magnificent Seven were: Avast SecureLine VPN, ExpressVPN, Kaspersky VPN Secure Connection, Mullvad VPN, NordVPN, Norton Secure VPN, and Private Internet Access.

AV-TEST performed this test in parallel for all products several times a day for a week, which allowed them to average the results of the performance test. So, let’s dive deep into the numbers and see how modern VPN solutions outperform good old dial-up access or ADSL.

The Speedtest, or There and Back Again

For most VPN use cases in everyday life, download speed is more important than upload speed, but for a full-fledged test, one needs to analyze both. The performance test used virtual machines configurations hosted in the Microsoft Azure cloud, and all products were run with their default, out-of-the-box configurations. And trust me, those VMs were pretty good and had a stable and very, very fast internet connection, with reference unencrypted download and upload speeds of up to 9Gbps. Yes, gigabits per second!

The first set of tests – download, upload and latency performances – were conducted using the “industry standard” Ookla LLC speedtest.net command-line application and compared to an unencrypted reference speed benchmark. They show how fast you can surf the web anonymously all around the globe, since all these tests were run for both local and overseas locations, as pictured below.

The comparative test local and overseas connections map

The comparative test local and overseas connections map

Due to the nature of the technology, using a VPN connection almost always reduces performance, and in the graph below we do see a significant drop in speed compared to an unsecured connection. Such is the price of anonymity. But let’s be honest: compare these values with the bandwidth provided by your ISP, and you’ll realize that in most cases you’ll never notice a drop in speed since your connection is still slower.

The comparative local and overseas results for the unencrypted download and upload performances, and the industry averages for all three tested locations (the more the better)

The comparative local and overseas results for the unencrypted download and upload performances, and the industry averages for all three tested locations (the more the better)

However, “average performance” is good for statistics and comparisons, but in everyday life you’re unlikely to prefer an “average car”. So let’s compare how fast all the test participants are, and we’ll see a noticeable difference in performance between the winners and the rest:

The combined averages for download and upload performances for local and overseas connected VPN servers (the more the better)

The combined averages for download and upload performances for local and overseas connected VPN servers (the more the better)

Kaspersky wins almost every race, with the exception of overseas upload average performance due to an unpatched server issue that had not been fixed at the time of testing.

The Latency of Clouds

– What do we want?
– When do we want it?
– Lag-free gaming!
– Now!

Ping time is vital for gamers. If the ping is too slow, players can react as fast as they want, but their reaction in the games they play doesn’t get through due to latency. In the public certification test, Kaspersky VPN showed an average local latency of 5.3 milliseconds across all locations compared to Norton Secure VPN’s average latency of 13 milliseconds (Boom! You lose). In the extended comparative test it shared second place in the local latency test with NordVPN – just behind Mullvad VPN – thus beating the industry average for VPNs. The overseas latency test showed no differences worth mentioning when comparing VPN products, as well as when comparing them to an unencrypted reference.

Leechers in the dark

Where do we value privacy the most? With good old torrents, for sure. Therefore, the speed of leeching through a VPN tunnel is critical for all torrent lovers. The test measured the time between the start of downloading a torrent and the end of writing the torrent file to the hard drive through a third-party torrent client.

The combined averages for the torrent download speeds (the more the better). * Norton Secure VPN doesn't support torrents on tested servers

The combined averages for the torrent download speeds (the more the better). * Norton Secure VPN doesn’t support torrents on tested servers

And here Kaspersky VPN Secure Connection wins again for both local and overseas torrent leeching.

The Tubes Burst

We all love Netflix. Video streaming services are booming and now they generate most internet traffic. And it is these video streaming services that like to annoy the user with geo-blocking the most – but which can be bypassed with a VPN! Therefore, a stable video stream without frame loss and delays through VPN plays such an important role. And here’s some good news for you movie buffs: Kaspersky VPN Secure Connection, as well as all other tested solutions, successfully passed the 4K streaming test with minor issues that users won’t notice (such as a few dropped frames and millisecond range lags).

Over Hill and Under Hill

However, in addition to the performance, it’s worth comparing other parameters of the tested solutions in the public certification test. Compared to Norton Secure VPN, Kaspersky VPN Secure Connection – using the OpenVPN protocol – supports more operating systems (including Linux, ChromeOS, AndroidTV and FireTV), has three times more server locations (90 vs. 29), and successfully passed all transparency tests. But what’s crucial is that Kaspersky VPN Secure Connection showed impeccable leak resistance in all security tests, while Norton Secure VPN allowed a DNS leak on reconnect, briefly exposing the device’s DNS queries.

Transparency is a precious thing

In terms of transparency and confidentiality, Kaspersky takes the privacy of its customers very seriously: the solution doesn’t collect more data than necessary, uses the highest industry standards to secure collected data, and the company regularly gets audited and publishes transparency reports.

Product security is ensured by Kaspersky’s vulnerability management and disclosure program, including its Bug Bounty Program. Kaspersky is also known as a pioneer in the creation of Transparency Centers all over the world to allow independent assessments of the company’s solutions’ security and safety.

The Last Stage

With unmatched performance in most of the speed tests conducted, Kaspersky VPN Secure Connection is the overall fastest VPN product tested in the performance test, and ranked #1 in most of the categories tested. It showed outstanding download and torrent speed in both local and overseas scenarios. In particular, the solution outperforms other participants at least two-fold in terms of overseas data transmission. Kaspersky VPN Secure Connection is the leading product for the local upload test, with results that are twice the industry average.

The measured latency is in the top three among all tested products. Like all other products, Kaspersky VPN Secure Connection had no problems playing 4K video from a local or overseas connection. It’s expected that once the problem with unpatched overseas servers is fixed, the performance of overseas uploads, which is currently below average, will improve significantly. Kaspersky VPN Secure Connection successfully passed all security tests and was awarded the “Approved Virtual Private Network Solution” badge.

 

And finally, if you’re a number-cruncher, you should definitely check out both reports’ PDFs here and here, where you’ll find plenty of crisp numbers to crunch.

]]>
full large medium thumbnail
Topics to expect at Black Hat 2022 | Kaspersky official blog https://www.kaspersky.com/blog/black-hat-2022-preview/45108/ Tue, 09 Aug 2022 15:31:14 +0000 https://www.kaspersky.com/blog/?p=45108 With Black Hat 2022 kicking off this week, we wanted to check in with some of our Kaspersky Global Research and Analysis Team (GReAT) members to see what they’re most looking forward to. What sessions are they hoping to attend? What new trends will emerge? What hot topics are missing from the event this year?

Kurt Baumgartner, principal security researcher

The first thing that’s piqued my attention coming up in Black Hat 2022 is Kim Zetter’s keynote “Pre-Stuxnet, Post-Stuxnet: Everything Has Changed, Nothing Has Changed.” Of course, Stuxnet changed things, but her perspective on ongoing security issues in light of past events and consequences should be fantastic.

The vast majority of talks this year are on offensive operations. There are also more than a handful of talks on “cyber-physical systems,” including Siemens’ devices, automotive remote keyless entry, secure radio communications and more. Some of the technical wizardry and its implications have become more alarming, and since Stuxnet – more understandable to the general audience.

A couple of other talks look particularly interesting due to the use of novel exploitation techniques and implications for large scale authentication schemes from well-known offensive researchers: “I Am Whoever I Say I Am: Infiltrating Identity Providers Using a 0Click Exploit” and “Elevating Kerberos to the Next Level.”

I would’ve expected to see more offensive talks on attacking various machine-learning technologies and offensive cryptocurrency research.

Giampaolo Dedola, senior security researcher

I’m glad that many Black Hat briefings reflect what Kaspersky experts foresaw in their APT predictions for 2022, confirming our insights on the current state of cybersecurity.

Several talks deserve special attention – related to and covering this year’s disruptive attacks and the geopolitical crisis in Ukraine. Since such topics are an essential part of the agenda, it confirms a strict interrelation between the digital and real world, and that cybersecurity is becoming even more relevant for ensuring physical safety.

This trend will expand in the future, as cyberattacks are already reaching targets beyond our planet, such as the attacks against ViaSat satellites and Starlink.

Finally, Black Hat will touch upon a growing issue: the ethics of how a government could exploit cyber operations to fabricate evidence to frame and incarcerate vulnerable opponents.

Jornt van der Wiel, senior security researcher

Black Hat’s interesting schedule covers a variety of topics related to exploitation of devices, systems, and certain equipment that’s not easily updated. As for research, it will be useful to learn about new methods of mobile GPU exploitation on Android. Another interesting issue is the novel vulnerabilities and exploitation techniques that reliably bypass Linux syscall tracing. I’m also looking forward to “Breaking Firmware Trust From Pre-EFI: Exploiting Early Boot Phases,” as it should elaborate on UEFI firmware, a recent hot theme due to its allowing malware to run even after the system is reinstalled.

We expect that some of these vulnerabilities and exploits that are “harder to patch on all devices” will be abused by cybercriminals and appear in the wild soon.

Boris Larin, lead security researcher

I expect in-the-wild zero-days and microarchitectural/firmware threats to be the key topics of the conference. In the last few years, with the help of our technologies, we’ve discovered more than a dozen actively exploited zero-day exploits used by different APTs (MysterySnail, PuzzleMaker, WizardOpium), and a number of novel UEFI rootkits (CosmicStrand, MoonBounce, FinSpy, MosaicRegressor).

Our findings show that these threats are becoming more relevant than ever. Attacks using such sophisticated techniques are becoming more common and widespread. Personally, I’m really looking forward to a number of presentations dedicated to these topics, such as: “Monitoring Surveillance Vendors: A Deep Dive into In-the-Wild Android Full Chains in 2021,” “Architecturally Leaking Data from the Microarchitecture” and “Do Not Trust the ASA, Trojans!

If you’re also attending Black Hat this year, let us know what topics and talks you’re most looking forward to. You can find more insights and reports from our experts on Securelist.

]]>
full large medium thumbnail
Industrial cybersecurity in 2020 | Kaspersky official blog https://www.kaspersky.com/blog/industrial-cybersecurity-2020/37031/ Wed, 16 Sep 2020 04:15:34 +0000 https://www.kaspersky.com/blog/?p=37031 Every security officer views remote connections to corporate systems as a potential threat. For infosec experts at industrial enterprises, and especially at critical infrastructure facilities, the threat feels very real.

You can’t blame them for being cautious. Industrial enterprises, for which downtime can mean damage in the millions of dollars, are tempting targets for cybercriminals of all stripes. Ransomware operators are constantly on the lookout for open RDP connections they can use to infect industrial systems. Employees with publicly known e-mail addresses often receive phishing emails with links to Trojans that provide remote access to attackers. Cybercriminals also keep an eye on HVAC operators, which sometimes connect remotely to the heating, ventilation, and air conditioning systems that operate in industrial environments.

And that was before 2020. With its pandemic, varying measures of self-isolation, and global switch to remote working, this year could hardly fail to recalibrate the work of infosec departments. With that in mind, our colleagues decided to learn more about how new conditions are affecting information security, including priorities and approaches, at industrial enterprises. That entailed interviewing cybersecurity decision-makers and policy-influencers at industrial companies worldwide.

Here is what they found: More than half (53%) of respondents admitted that the pandemic has caused a shift to more staff members working from home, which has become a kind of stress test for infosec services. Because of the huge number of external connections, the vast majority of companies are now carrying out periodic assessments of the security level of OT networks (all but 5% of those surveyed had such plans). Many have had to rethink their general approach to perimeter protection; it has become clear that segmentation and workstation protection are no longer enough. Only 7% of respondents stated that their cybersecurity strategy had been reasonably effective during the pandemic.

To find out more about the results of the study, download the full report, “The state of industrial cybersecurity in the era of digitalization.” In addition to explaining how the pandemic has affected the work of industrial security officers, it provides insight into who influences security decisions and how, who the drivers of innovation are, and, above all, the problems cybersecurity departments faced in 2020.

]]>
full large medium thumbnail