independent tests – Kaspersky official blog
https://www.kaspersky.com/blog

Kaspersky Standard wins Product of the Year award from AV-Comparatives | Kaspersky official blog
https://www.kaspersky.com/blog/kaspersky-product-of-the-year-2023-av-comparatives/50292/ (January 23, 2024)

Great news! The latest generation of our security solutions for home users has received a Product of the Year 2023 award. It's the result of extensive multi-stage testing conducted by independent European test lab AV-Comparatives over the course of 2023, which examined and evaluated 16 security solutions from popular vendors. Here's what this victory means, what it consists of, how the testing was done, and what other awards we picked up.

Kaspersky Standard named Product of the Year 2023 by AV-Comparatives

Our Kaspersky Standard security solution was named Product of the Year 2023 after in-depth testing by AV-Comparatives

What does “Product of the Year” actually mean?

The tests were carried out on our basic security solution for home users — Kaspersky Standard — but its outstanding results apply equally to all our endpoint products. The reason is simple: all our solutions use the same stack of detection and protection technologies, which was thoroughly tested by AV-Comparatives.

Thus, this top award, Product of the Year 2023, applies equally to our more advanced home protection solutions — Kaspersky Plus and Kaspersky Premium — and also our business products, such as Kaspersky Endpoint Security for Business and Kaspersky Small Office Security.

So what does it take to earn the coveted Product of the Year title?

A security solution needs to take part in seven tests throughout the year and consistently achieve the highest Advanced+ score in each of them. These tests examine the quality of protection against common threats and targeted attacks, resistance to false positives, and the impact on overall system performance. This golden triad of metrics forms the basis of a comprehensive evaluation of security solution performance.

That the testing is continuous over the course of a year is important since malware developers hardly sit around twiddling their thumbs — new threats emerge all the time, and existing ones evolve with breathtaking speed. Consequently, security solution developers must keep moving forward at the same pace. That’s why assessing performance at a single point in time is misleading — to get a true picture of a solution’s effectiveness requires extensive and repeated testing all year long. Which is precisely what AV-Comparatives does.

AV-Comparatives examined 16 security solutions from the largest vendors in its tests. Winning such a significant contest clearly demonstrates the highest level of protection provided by our products.

AV-Comparatives 2023 Test Participants

The seven rounds of tests — some of which individually lasted several months — that our protection took part in to eventually win the Product of the Year award were the following:

  1. March 2023: Malware Protection Test spring series
  2. April 2023: Performance Test spring series
  3. February–May 2023: Real-World Protection Test first series
  4. September 2023: Malware Protection Test autumn series
  5. September–October 2023: Advanced Threat Protection Test
  6. October 2023: Performance Test autumn series
  7. July–October 2023: Real-World Protection Test second series

To earn AV-Comparatives’ Product of the Year title, a security solution needs to get the highest score in each stage of testing. And our product rose to the challenge: in each of the tests listed above, Kaspersky Standard scooped the top score — Advanced+.

AV-Comparatives awards received by Kaspersky in 2023 interim tests

The Product of the Year award went to Kaspersky Standard based on top marks in all seven of a series of AV-Comparatives’ tests in 2023
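In other words, the qualification rule is strict: a single slip to a lower grade in any round rules a product out for the year. As a rough sketch of that logic (the pass check below is our own simplification for illustration, not AV-Comparatives' internal tooling):

  # Simplified sketch of the Product of the Year rule: every test round
  # in the annual series must earn the top Advanced+ grade.
  results_2023 = {
      "Malware Protection (spring)": "Advanced+",
      "Performance (spring)": "Advanced+",
      "Real-World Protection (series 1)": "Advanced+",
      "Malware Protection (autumn)": "Advanced+",
      "Advanced Threat Protection": "Advanced+",
      "Performance (autumn)": "Advanced+",
      "Real-World Protection (series 2)": "Advanced+",
  }
  qualifies = all(grade == "Advanced+" for grade in results_2023.values())
  print("Qualifies for Product of the Year:", qualifies)  # True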

How AV-Comparatives tests security solutions

Now for a closer look at AV-Comparatives’ testing methodology. The different tests evaluate the different capabilities of the security solutions taking part.

Malware Protection Test

AV-Comparatives awards received by Kaspersky in 2023 interim Malware Protection tests

This test examines the solution’s ability to detect prevalent malware. In the first phase of the test, malicious files (AV-Comparatives uses just over 10,000 malware samples) are written to the drive of the test computer, after which they’re scanned by the tested security solution — at first offline, without internet access, and then online. Any malicious files that were missed by the protective solution during static scanning are then run. If the product fails to prevent or reverse all the malware’s actions within a certain time, the threat is considered to have been missed. Based on the number of threats missed, AV-Comparatives assigns a protection score to the solution.

Also during this test, the security solutions are evaluated for false positives. High-quality protection shouldn’t mistakenly flag clean applications or safe activities. After all, if one cries wolf too often, the user will begin to ignore the warnings, and sooner or later malware will strike. Not to mention that false alarms are extremely annoying.

The final score is based on these two metrics. An Advanced+ score means reliable protection with a minimum of false positives.
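To make the scoring idea concrete, here's a minimal sketch of how such a two-metric verdict could be tallied; the miss and false-positive counts and the Advanced+ threshold are illustrative assumptions, not AV-Comparatives' published cut-offs:

  # Illustrative tally for a Malware Protection Test run: a threat counts as
  # missed only if it evades the offline scan, the online scan AND
  # execution-time protection; false positives are tracked separately.
  TOTAL_SAMPLES = 10_000          # "just over 10,000 malware samples"
  missed_threats = 3              # hypothetical count of fully missed threats
  false_positives = 1             # hypothetical count of clean files flagged

  protection_rate = 100 * (TOTAL_SAMPLES - missed_threats) / TOTAL_SAMPLES
  print(f"Protection rate: {protection_rate:.2f}%, false positives: {false_positives}")

  # Assumed (not official) criterion: top marks require near-perfect
  # protection combined with very few false alarms.
  if protection_rate >= 99.9 and false_positives <= 1:
      print("Verdict: Advanced+ (illustrative threshold)")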

Real-World Protection Test

AV-Comparatives awards received by Kaspersky in 2023 interim Real-World Protection tests

This test focuses on protection against the most current web-hosted threats at the time of testing. Malware (both malicious files and web exploits) is out there on the internet, and the solutions being tested can deploy their whole arsenals of built-in security technologies to detect the threats. Detection and blocking of a threat with subsequent rollback of all changes can occur at any stage: when opening a dangerous link, when downloading and saving a malicious file, or when the malware is already running. In any of these cases, the solution is marked a success.

As before, both the number of missed threats and also the number of false positives are taken into account for the final score. Advanced+ is awarded to products that minimize both these metrics.

Advanced Threat Protection Test

AV-Comparatives award received by Kaspersky in the 2023 Advanced Threat Protection Test

This test assesses the ability of the solution to withstand targeted attacks. To this end, AV-Comparatives designs and launches 15 attacks to simulate real-world ones, using diverse tools, tactics and techniques, with various initial conditions and along different vectors.

A test for false positives is also carried out. This checks whether the solution blocks any potentially risky, but not necessarily dangerous, activity (such as opening email attachments), which increases the level of protection at the expense of user convenience and productivity.

Performance Test

AV-Comparatives awards received by Kaspersky in 2023 interim Performance tests

Another critical aspect of a security solution’s evaluation is its impact on system performance. Here, the lab engineers emulate a number of typical user scenarios to evaluate how the solution under test affects their run time. The list of scenarios includes:

  • Copying and recopying files
  • Archiving and unpacking files
  • Installing and uninstalling programs
  • Starting and restarting programs
  • Downloading files from the internet
  • Web browsing

Additionally, system-performance drops are measured against the PCMark 10 benchmark.

Based on these measurements, AV-Comparatives calculates the total impact of each solution on system performance (the lower this metric, the better), then applies a statistical model to assign a final score to the products: Advanced+, Advanced, Standard, Tested, Not passed. Naturally, Advanced+ means minimal impact on computer performance.
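As a back-of-the-envelope illustration of this idea (the scenario slowdowns and tier thresholds below are invented for the example; the lab's actual statistical model is more involved):

  # Illustrative impact score: average slowdown (percent versus an
  # unprotected baseline) across the user scenarios, plus the PCMark 10 drop.
  scenario_slowdowns = {
      "file copying": 2.1,
      "archiving/unpacking": 1.5,
      "installing/uninstalling programs": 0.8,
      "starting/restarting programs": 1.2,
      "downloading files": 0.4,
      "web browsing": 0.9,
  }
  pcmark_drop = 1.0  # hypothetical % drop in the PCMark 10 score

  impact = sum(scenario_slowdowns.values()) / len(scenario_slowdowns) + pcmark_drop
  print(f"Total impact score: {impact:.1f} (lower is better)")

  # Hypothetical tier mapping; the real awards come from a statistical
  # comparison across all tested products.
  tiers = [(5, "Advanced+"), (15, "Advanced"), (30, "Standard")]
  award = next((name for limit, name in tiers if impact <= limit), "Tested")
  print("Award:", award)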

What other AV-Comparatives awards did Kaspersky pick up in 2023?

Besides Kaspersky Standard being named Product of the Year, our products received several other important awards based on AV-Comparatives’ tests in 2023:

  • Real World Protection 2023 Silver
  • Malware Protection 2023 Silver
  • Advanced Threat Protection Consumer 2023 Silver
  • Best Overall Speed 2023 Bronze
  • Lowest False Positives 2023 Bronze
  • Certified Advanced Threat Protection 2023
  • Strategic Leader 2023 for Endpoint Prevention and Response Test 2023
  • Approved Enterprise Business Security 2023

We have a long-standing commitment to using independent research by recognized test labs to impartially assess the quality of our solutions and address identified weaknesses when upgrading our technologies. For 20 years now, the independent test lab AV-Comparatives has been putting our solutions through their paces, confirming time and again our quality of protection and conferring a multitude of awards.

Throughout these two decades, we've received the highest Product of the Year award seven times; no other security solution vendor has so many victories. And if we add to this all the Outstanding Product and Top Rated awards we've also received over the years, it turns out that Kaspersky security solutions have received top recognition from AV-Comparatives' experts a full 16 times in 20 years!

Besides this, AV-Comparatives has also awarded us:

  • 57 Gold, Silver, and Bronze awards in a variety of specialized tests
  • Two consecutive Strategic Leader awards in 2022 and 2023, for high results in protection against targeted attacks by the Kaspersky EDR Expert solution
  • Confirmation of 100% anti-tampering protection (Anti-Tampering Test 2023)
  • Confirmation of 100% protection against LSASS attacks (LSASS Credential Dumping Test 2022)
  • Confirmation of top-quality Network Attached Storage protection (Test of AV solution for Storage)
  • and numerous other awards

Learn more about the awards we’ve received, and check out our performance dynamics in independent tests from year to year by visiting our TOP 3 Metrics page.

Kaspersky EDR comes first in SE Labs tests | Kaspersky official blog
https://www.kaspersky.com/blog/kedr-selabs-test-2022/45160/ (August 18, 2022)

The best way to prove the effectiveness of a security solution is to test it in conditions that are as real-world as possible, using typical tactics and techniques of targeted attacks. Kaspersky regularly participates in such tests and sits pretty at the top of the ratings.

The results of a recent test — Enterprise Advanced Security (EDR): Enterprise 2022 Q2 – DETECTION — were revealed in an SE Labs report. The British company has been putting the security solutions of major vendors through their paces for several years now. In this latest test, our business product Kaspersky Endpoint Detection and Response Expert achieved an absolute 100% score in targeted attack detection and was awarded the highest possible rating – AAA.

This is not SE Labs’ first analysis of our products for protecting corporate infrastructure against sophisticated threats. The company previously ran its Breach Response Test (in which we took part in 2019). In 2021, our product was tested in their Advanced Security Test (EDR). Since then, the testing methodology has been tweaked, and the test itself has been divided into two parts: Detection and Protection. This time, SE Labs studied how effective security solutions are at detecting malicious activity. Besides Kaspersky EDR Expert, four other products took part in the test: Broadcom Symantec, CrowdStrike, BlackBerry, and another, anonymous, solution.

Grading system

The testing was made up of several checks, but to get a feel for the results, it will suffice to look at the Total Accuracy Ratings. This basically shows how well each solution detected attacks at different stages, and whether it pestered the user with false positives. For even greater visual clarity, the participating solutions were assigned an award: from AAA (for products with a high Total Accuracy Rating) to D (for the least effective solutions). As mentioned, our solution got a 100% result and an AAA rating.

The Total Accuracy Ratings consist of scores in two categories:

  • Detection Accuracy: this takes into account the success of detecting each significant stage of an attack.
  • Legitimate Software Rating: the fewer the false positives generated by the product, the higher the score.

There’s one other key indicator: Attacks Detected. This is the percentage of attacks detected by the solution during at least one of the stages, giving the infosec team a chance to respond to the incident.
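For clarity, here's a simplified sketch of how these scores fit together; the equal weighting and the award bands are our own assumptions for illustration, and SE Labs' actual formula may differ:

  # Illustrative combination of SE Labs-style scores.
  detection_accuracy = 100.0          # % of significant attack stages detected
  legitimate_software_rating = 100.0  # % of clean objects correctly allowed
  attacks_detected = 100.0            # % of attacks caught at >= 1 stage

  # Assumed equal weighting, purely for illustration.
  total_accuracy = 0.5 * detection_accuracy + 0.5 * legitimate_software_rating

  # Hypothetical award bands; the real AAA..D ratings are assigned by SE Labs.
  award = "AAA" if total_accuracy >= 95 else "AA" if total_accuracy >= 85 else "A"
  print(f"Total Accuracy: {total_accuracy:.1f}% -> award {award}; "
        f"attacks detected at some stage: {attacks_detected:.0f}%")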

How we were tested

Ideally, testing should reveal how the solution would behave during a real attack. With that in mind, SE Labs tried to make the test environment as life-like as possible. First, it wasn’t the developers who configured the security solutions for the test, but SE Labs’ own testers, who received instructions from the vendor – as clients’ infosec teams usually do. Second, the tests were carried out across the entire attack chain – from first contact to data theft or some other outcome. Third, the tests were based on the attack methods of four real and active APT groups:

  • Wizard Spider, which targets corporations, banks and even hospitals. Among its tools is the banking Trojan Trickbot.
  • Sandworm, which primarily targets government agencies and is infamous for its NotPetya malware, which masqueraded as ransomware, but in fact destroyed victims’ data beyond recovery.
  • Lazarus, which became widely known after the large-scale attack on Sony Pictures in November 2014. Having previously focused on the banking sector, the group has recently set its sights on crypto-exchanges.
  • Operation Wocao, which targets government agencies, service providers, energy and tech companies, and the healthcare sector.

Threat detection tests

In the Detection Accuracy test, SE Labs studied how effectively security solutions detect threats. This involved carrying out 17 complex attacks based on four real-world attacks by the Wizard Spider, Sandworm, Lazarus Group, and Operation Wocao actors, in which four significant stages were highlighted, each consisting of one or more interconnected steps.

The test logic does not require the solution to detect all events at any particular stage of the attack; it is enough to identify at least one of them. For example, if the product failed to notice how the payload got onto the device, but detected an attempt to run it, it successfully passed the first stage.

Delivery/Execution. This stage tested the solution’s capacity to detect an attack in its infancy: at the time of delivery — for example, of a phishing e-mail or malicious link — and execution of the dangerous code. In real conditions, the attack is usually stopped there, since the security solution simply doesn’t allow the malware to go any further. But for the purposes of the test, the attack chain was continued to see how the solution would cope with the next stages.

Action. Here, the researchers studied the solution’s behavior when attackers have already gained access to the endpoint. It was required to detect an illegitimate action by the software.

Privilege Escalation/Action. In a successful attack, the intruder attempts to gain more privileges in the system and cause even more damage. If the security solution monitors such events or the privilege escalation process itself, it’s awarded extra points.

Lateral Movement/Action. Having penetrated the endpoint, the attacker can try to infect other devices on the corporate network. This is known as lateral movement. The testers checked whether the security solutions detected attempts at such movement or any actions made possible as a consequence of it.

Kaspersky EDR Expert scored 100% in this segment; that is, not a single stage of any attack went unnoticed.
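The "at least one detected event per stage" rule described above is easy to express in a few lines; the event outcomes below are invented purely to show the bookkeeping:

  # A stage counts as detected if ANY of its constituent events was flagged;
  # an attack counts as detected if any of its stages was flagged.
  attack = {
      "Delivery/Execution": [False, True],   # delivery missed, execution caught
      "Action": [True],
      "Privilege Escalation/Action": [True, False],
      "Lateral Movement/Action": [True],
  }

  stages_detected = {stage: any(events) for stage, events in attack.items()}
  attack_detected = any(stages_detected.values())
  stage_score = 100 * sum(stages_detected.values()) / len(stages_detected)

  print(stages_detected)                                  # every stage True here
  print(f"Stages detected for this attack: {stage_score:.0f}%")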

Legitimate Software Ratings

Good protection has to not only reliably repel threats, but also not prevent the user from using safe services. For this, the researchers introduced a separate score: the higher it was, the less often the solution mistakenly flagged legitimate websites or programs – especially popular ones – as dangerous.

Once again, Kaspersky EDR Expert got 100%.

Test results

Based on all the test results, Kaspersky Endpoint Detection and Response Expert was awarded the highest available rating: AAA. Three other products earned the same rating: Broadcom Symantec Endpoint Security and Cloud Workload Protection, CrowdStrike Falcon, and the anonymous solution. However, only we and Broadcom Symantec achieved a 100% score in the Total Accuracy Ratings.

Impressive results on the anti-APT front | Kaspersky official blog
https://www.kaspersky.com/blog/icsa-q2-certification/17891/ (August 3, 2017)

As we have mentioned before, we consider independent tests not as an indicator of our solutions’ effectiveness, but more as a tool to improve our technologies. Therefore, we rarely publish stories about test successes, despite our products’ consistently high performance. However, the Advanced Threat Defense certification, conducted by the ICSA Labs test lab, is worth highlighting.


Our Kaspersky Anti-Targeted Attack platform participated in this certification for three consecutive quarters and showed an excellent result in the latest — 100% threat detection and 0 false positives. Why is that important for corporate clients, and what is behind these impressive figures?

Certification

According to ICSA Labs, the purpose of this certification is to determine how effective different protective solutions are against the latest cyberthreats. By the “latest,” ICSA means threats that are not detected by the majority of traditional solutions. When choosing test scenarios, they relied on the Verizon Data Breach Investigations Report. Therefore, first, their test kit consists of the most current threats, and second, the selection changes every quarter to track the evolving threat landscape.

ICSA Labs Advanced Threat Defense Certified

This allows ICSA to analyze the dynamics of a solution’s performance. Strictly speaking, a good result in a single test is not much of an indicator, but if a product keeps showing good results despite regular changes in the threat set, that is a clear sign of effectiveness.

At the same time, the Verizon report contains data on cyberincidents that occurred in enterprise-class companies. Therefore, these are not just the most common and relevant attack vectors — these are threats used by cybercriminals against large businesses.

Latest results

The most recent study was conducted in the second quarter of this year, and its results were published in July. For each of the participants, ICSA Labs experts created a test infrastructure protected by a specialized solution. Then, over 37 days, they simulated various attacks on this infrastructure. In total, more than 1,100 tests were conducted using almost 600 samples of malware, and all of them were successfully detected by our specialized solution. The Kaspersky Anti-Targeted Attack platform also scored perfectly on false positives: ICSA Labs experts launched more than 500 clean samples that were meant to look malicious, and our solution did not flag any of them as dangerous.

ICSA Labs does not perform comparative tests, and therefore it does not publish data summary tables. So we made our own tables, based on the open data, which you can find right here.

How did we achieve this?

Our products, and in particular the Next Gen Kaspersky Anti-Targeted Attack platform, use a multilevel approach to threat detection. There are static analysis mechanisms, configurable YARA rules, unique SNORT rules for the IDS engine, certificate checking mechanisms, file and domain reputation checks via the global threat base (KSN), tools for advanced dynamic analysis in an isolated environment (sandbox), and a machine-learning engine — our Targeted Attacks Analyzer. Kaspersky Anti-Targeted Attack’s combination of those tools allows it to identify both known and as-yet-unknown malicious technologies.
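To give a feel for just one of those layers (static analysis with analyst-supplied YARA rules), here's a tiny, self-contained sketch using the open-source yara-python module; the rule and the marker string are made up for the example and have nothing to do with Kaspersky's actual rule sets:

  import yara  # open-source yara-python bindings; assumed to be installed

  # A made-up demonstration rule; real detection rules are far richer.
  RULE = r'''
  rule demo_suspicious_marker
  {
      strings:
          $marker = "EVIL_PAYLOAD_MARKER"
      condition:
          $marker
  }
  '''

  rules = yara.compile(source=RULE)
  matches = rules.match(data=b"...EVIL_PAYLOAD_MARKER...")
  print("Static YARA match:", [m.rule for m in matches])  # ['demo_suspicious_marker']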

The Targeted Attacks Analyzer is, in fact, the central analytical core. Based on machine learning, it allows the Kaspersky Anti-Targeted Attack platform not only to correlate information coming from different detection levels, but also to successfully detect anomalies in network and workstation behavior. Behavioral analysis can spot deviations indicating that an attack is in progress even when no malicious software is used — for example, an attack conducted with legitimate software, stolen credentials, or through holes in IT infrastructure.

However, threat detection is not enough. Strictly speaking, if a product blocks everything, then it will also stop 100% of threats — but legitimate programs won’t work either. Therefore, it is important to work without false positives. Our technologies allow us to recognize safe processes, thanks to the HuMachine intelligence principle. The right balance between the detection level and the number of false positives rests on three elements:

  • Big data (we have a huge database of information on threats that has been collected for more than 20 years and is updated via the Kaspersky Security Network in real time with information from our solutions working on client computers around the world);
  • Advanced machine learning technologies that analyze this data;
  • Expertise of researchers, who, if necessary, correct and direct the machine-learning engine.

So we can say that the results of the ICSA certification are in many respects a product of the HuMachine principle.

To learn more about the Kaspersky Anti-Targeted Attack platform, visit this website.

Actual case of indecorous test | Kaspersky official blog
https://www.kaspersky.com/blog/actual-case-test/6687/ (March 27, 2017)

Some time ago, we published a post explaining how businesses should navigate the vast sea of benchmark tests and testing organizations. The overarching idea boiled down to the following: For a test to be trustworthy, its methodology and published results must be transparent. That means all participants should be aware of testing conditions and assured there are no mistakes in evaluation. Also, results must be verifiable and reproducible.

Those conditions might seem obvious — not to mention, already satisfied. After all, every tester publishes a “Methodology” document, either in its credentials or on its website, where testing scenarios are described in detail, even down to the selection of malware used in the test run.

In fact, however, it’s not enough. A result such as “The product detected 15 of 20 new malware samples” has no practical value. What malware was it? Did it bring any real threat? Were the samples different? Did the tester double-check the results? And if so, how did they check?

Ultimately, there’s a lot of room for ambiguity. One example that demonstrates seemingly straightforward results with a lot of room for error is a recent benchmark test, run by NSS Labs, of “advanced endpoint protection” products.

What happened?

We take third-party benchmark testing very seriously. One of the best reasons to support a wide field of testers is that different testers use different security testing methodologies, and we need to know how our products perform in such conditions. Having obtained test results, we need to see what was happening at each stage of the benchmarking process. This enables us to identify bugs in our products (they do sometimes exist, unfortunately) and, possibly, the tester’s mistakes. The latter could be the result of outdated databases, a bad connection with the vendor’s cloud services, a sample malfunction, or misinterpretation of test results.

This is why we ask researchers to enable tracing in the product. Usually, it is done. Moreover, in this case, we were allowed to remotely access the test stand and apply needed settings to the product.

Naturally, as soon as we saw the test results in the case of the NSS Labs benchmark tests, we decided to analyze the logs. When we did, we found that some of the malware samples our product allegedly missed were detected by both static (which do not require the sample to be executed) and behavioral technologies. Moreover, these files were detected by signatures that our databases have contained for quite a long time (some of them were there as of 2014). That seemed odd.

Then we found out we could not study the logs because tracing was disabled. This point is already enough to bring the test results into question. We kept looking and found more.

Which threats were used in the test?

To reproduce an experiment and understand why a solution did not deliver, a vendor needs to see all of the relevant details. To make that possible, testers usually upload malware samples, provide a capture of network traffic patterns, and, if the attack techniques were widespread, explain how it can be repeated with the aid of known malware kits (such as Metasploit, Canvas, and the likes of them).

Well, NSS Labs refused to provide some malware samples, which they called someone’s intellectual property. They did eventually provide the CVE ID of the vulnerability that was exploited when our product was allegedly unable to stop the attack. The vulnerability was not included in known, publicly available kits, so we were unable to reproduce and subsequently verify the attack.

In no way are we trying to skirt the rules protecting intellectual property. However, if protecting intellectual property might compromise the transparency of the test, that should have been discussed with participating vendors in advance. No one would have been hurt if vendors had studied the technique of the attack under a nondisclosure agreement.

Of course, the more sophisticated and unique the attack scenarios used, the more valuable the test. But of what value are the results of an attack that is unverifiable and unreproducible? Our industry’s ultimate goal is protecting end users. So, of course you can find imperfections in a product and cite them in the results, but you should also enable companies to better protect users.

In a nutshell, we were not able to obtain immediate proof of the majority of flaws in our product.

What files were used in the benchmark tests?

As a rule, in a benchmark test, security products are challenged to respond to malicious files. If a product detects and blocks them, it gets a point. But there are also tests in which a product has to analyze a legitimate file to test the former for false positives. If a product lets a clean file run, it gets another point. That seems straightforward.

But the NSS Labs test files included the original PsExec utility from Mark Russinovich’s Sysinternals PsTools suite. Although the utility carries Microsoft’s valid digital signature, for purposes of testing, NSS Labs decided the products should deem it malicious. But many system administrators use PsExec to automate their work. Following the test’s logic, we would also have to judge cmd.exe and a whole range of similar, legitimate programs malicious.

Of course, it’s undeniable that this tool could be used with malicious intent. But for this test scenario to be appropriate, the entire attack kill chain would have to be experimentally verified. A successful attack would constitute a failure of the product, of course. Expecting a security solution to detect the “malicious” file as such detracts from the utility of the test results.

Also, vendors and testers have differing views on “potentially malicious” software. We think such software should not be seen as outright malicious — for some users, these are legitimate tools; others consider them potentially risky. Using such samples in benchmarks is at best inappropriate.

Which version of product was tested?

While we were studying the results, we found out that in the majority of testing scenarios, our products’ databases had not been updated for more than a month — a fact somehow omitted in the final report. The researchers said it was OK: Some users won’t have updated their installed products, after all.

No competitive security solution relies solely on a database these days, and so naturally, benchmark tests also include heuristics and behavioral analysis. But, according to the testing methodology in this case, the purpose was to test straightforward blocking and detection — not heuristics only.

In any case, all test participants should be on the same page in terms of updates. Otherwise, how could the test be comparative?

The interaction

To eliminate potential testing mistakes, it’s essential for all parties to have access to the most detailed information. In this respect, the interaction with the researchers was a bit weird, and it challenged our attempts to understand both the process of the test and the resulting documentation. The timing also created some issues, with shifting deadlines in the run up to RSA making thorough analysis, Q&A, and reproduction impossible in the conference timeline. Finally, the resource for vendors to obtain samples was problematic — some files were missing, other files were continuously replaced, and some files did not match the table of results.

Considering all of the above, can we agree the results were not fair and transparent?

We are now negotiating with the testing lab, and some of our claims have already been accepted by the tester. However, we have yet to reach consensus on many other issues. For example, the tester cited one malware sample that detected a working security product in the system and aborted the malicious behavior. Because it did not exhibit any malicious behavior, it was not detected, and the tester called that a failure of the product. However, a user would be protected from a threat thanks to the security product. We call that success.

Summing up all of the above, we have come up with a list of suggested requirements for benchmark tests. We think tester compliance is critical to ensure research transparency and impartiality. Here are the requirements:

  • A tester must present logs, a capture of traffic patterns, and proof of product success or failure;
  • A tester must provide files or reproducible test cases, at least for those the product allegedly failed to detect;
  • Solid proof is required that an attack simulated during the test and undetected by a product indeed inflicted harm on the test system. Potentially malicious software should be considered part of a particular test case, with nondetection considered a failure only upon proof that the sample inflicts real damage;
  • Clean files may and should be used in a test to check for false positives, but they cannot be treated as threats based on potential for misuse. (Also, a modified clean sample should not be considered a threat — whatever modifications are used, the files continue to be “clean”);
  • The materials serving as proof of results must be provided to participants on a fair and equal basis.
New test methodology: Something went wrong | Kaspersky official blog
https://www.kaspersky.com/blog/new-test-methodology-something-went-wrong/15168/ (February 22, 2017)

As we’ve written about before, we at Kaspersky Lab eagerly support a variety of benchmarking methodologies for security products. Ultimately, not only do benchmark tests show which solutions are the most effective, but they also serve to fuel constant product innovation and improvement. That is especially true of holistic research with testing scenarios reflecting real-life user behavior.

That’s why we were thrilled to learn that NSS Labs devised a new methodology for testing enterprise-class security products, which help to successfully detect, prevent, and continuously monitor threats at the endpoint level. In NSS Labs terminology, such solutions are called Advanced Endpoint Protection (AEP) solutions.

In mid-February, NSS Labs introduced the results of its pilot test using the new methodology. The products chosen for the assessment were 13 solutions classified as AEP, including the latest version of Kaspersky Endpoint Security for Business, which performed quite well. However, the report we got raised serious questions. The key reason: When we reproduced the tests, we got very different results. In fact, our own testing showed Kaspersky Endpoint Security for Business underperforming in just two scenarios, with the rest of the test scenarios yielding good results. Also unfortunate: we did not have enough time to discuss the performance with NSS Labs analysts (a common practice) before the report came out. After all, tests can fail, too. All in all, such a hastily prepared report could be explained by a rush to publish the results in time for the RSA conference.

We are now discussing the methodology with NSS Labs and together trying to figure out what could have gone wrong. Currently, we think the new testing method requires some additional work. However, we are confident the problems will be solved, and then we will be able to back up our achievements with reports prepared by one more authoritative research body.

In general, we stick to what we have already said in one of our previous posts: Each research report should be taken with a sprinkle of healthy skepticism, with decision-makers relying on the results of as many independent tests as possible.

 

Independent benchmark testing: Evaluating the evaluators | Kaspersky official blog
https://www.kaspersky.com/blog/independent-testing/6524/ (February 10, 2017)

How can you know which enterprise solution to trust with your company’s security? You certainly can’t go by the vendors’ marketing materials, which are hardly objective. Tips and recommendations from industry peers may sound like a great resource, but in reality, they can be even less useful because every company has unique cybersecurity requirements, IT environments, and other needs. And, strictly speaking, not every advisor can realistically evaluate the state of cybersecurity in his or her own company.

That’s why it’s crucial to have access to objective information from independent industry specialists who base their evaluations on measurable parameters. By evaluating products under the same conditions, independent testers can accurately identify winners in each discipline. With their reports, a customer has the opportunity to choose products based on objective performance results.

That said, benchmark tests are not performed solely to help users make purchasing decisions. Vendors need them as well, to see whether they are on a par with the rest of the market, whether their solutions can operate effectively in the modern threat landscape, and whether they are moving in the right general direction with their products.

Gaining the top position in an independent tester’s overall rating is solid proof of a product’s effectiveness. That’s why market players willingly participate in unbiased tests of their solutions and help independent labs improve their methodologies. If a vendor aggressively markets its product’s technical excellence while failing to submit it for independent benchmark tests, think twice about whether the product is likely to be as good as advertised.

The tests

Sometimes, market players request special selective benchmark tests: to evaluate a new technology, to check product performance in specific circumstances, and so forth. In these cases, vendors pay all costs for test-related work and thus are entitled to choose participants or adapt the methodology. Sometimes, a vendor asks for a “duel” with a certain competitor.

Theoretically, the vendor that makes the request can require special conditions that give its product certain advantages. Or the testing case can be totally artificial and irrelevant to the real threat landscape. In those cases, the test results are not trustworthy. In general, look for a majority of market players to approve of a test’s methodology.

Sometimes testers themselves see a market’s interest in a specific technology and decide to perform selective benchmarks. For example, a tester might evaluate the quality of cloud protection, exploit protection, antiransomware tools, or protection from banking threats.

Regular comparative benchmarks occur on a schedule — annually, semiannually, or bimonthly, for example. The benchmark tests evaluate and rank competing products on the basis of various tasks. The top performers get an award.

Sometimes, such tests take place over the course of a whole year, and the lab may put out an intermediate report every so often. Those tests are called continuous comparative benchmarks. For such evaluations, a vendor has to provide its products for testing on an ongoing basis, without missing an update. This ensures each solution is evaluated under dynamic conditions and more thoroughly. It’s much harder to remain the leader over time than to become a one-time winner. Continuous benchmark tests provide a holistic view of the industry. They also shine a light on vendors that subject themselves to only one test per year (you cannot get an adequate picture of a product based on only one examination).

Some researchers run certification tests, which evaluate a single product against a limited scope of parameters. The point of this methodology is to determine if the product meets certain requirements (usually minimal). They help separate real products from fake (unfortunately, fakes are out there). This kind of testing does not provide a clear view of why a product succeeded; it’s useful for understanding a product’s merits, but you should not rely on certification alone to make buying decisions.

Methodologies

Every test laboratory uses its own methodology. In most cases, it is a result of an evolution of the various methods. Every laboratory collects independent sets of test cases. For this reason, you should look at the results of various tests carried out by different companies to get a comprehensive picture of the product’s effectiveness.

The first antivirus benchmark tests were based on primitive checks. A lab would collect a selection of viruses and scan it with each of the available products. This procedure was called on-demand scanning (ODS). A variation of such tests relied on on-access scanning, which analyzed files in the process of copying them. However, both threats and security solutions evolved quite fast. Although such benchmarks are still in use, they are worth little on their own.

Further development of the methodology involved more testing of behavioral analysis technologies. For this purpose, malware samples were executed on the machines. This kind of testing complicated the process, increasing test duration.

With malware becoming more and more sophisticated, those relatively primitive methods became increasingly irrelevant. For example, some malware functions exclusively inside a specific environment (operating system, system language, browser, installed applications, even country). Moreover, the most cunning malware samples thwart analysis by recognizing and not running in an isolated environment.

Testing methodology thus required further improvement. Enter real-world (RW) benchmark tests. The test machines and conditions closely mirror real-world specs and common user behavior. The method offers more precise results, but it’s complex, cumbersome, and expensive. That’s why only a limited number of labs run RW benchmark tests.

Sometimes testers certify based only on behavioral, or proactive, tests. During those tests, products scan threats that are guaranteed unknown to them; they must detect threats based only on behavioral analysis. The testers install the system on a disconnected PC, leave it for several months, and then feed it newly discovered samples. Sometimes, they even engineer or modify malicious code to emulate a brand-new threat. However, with cloud technologies spreading fast, that sort of approach is becoming obsolete.

Finally, a mature benchmarking methodology includes two more types of tests. Even if a solution proves effective at detecting and neutralizing malicious code, it’s totally impractical if it gobbles computing resources, and therefore, performance tests are part of the standard battery. The false positive (FP) test is even more important: A good solution should not flag a legitimate application as malicious.

How to use benchmark tests

Any organization that tests cybersecurity products should make its methodology transparent to vendors and consumers alike. How can you trust the test results otherwise?

Here are four key reasons to be skeptical about a vendor’s claims of product effectiveness:

  • The vendor uses benchmark tests that do not employ a transparent methodology;
  • The vendor participated in only one test, avoiding all other tests in the series;
  • The vendor avoids providing its testing product to independent testing experts;
  • The vendor participates only in tests with methodology built around artificial cases that don’t reflect real-world use.

Always evaluate test results over time to get a balanced view, and don’t stick to a single benchmarking methodology. A product’s benchmark tests should be handled by different labs for a comprehensive picture of its strengths and abilities.

Note the operating system under which tests were carried out, as well. For example, one solution might be more effective on Windows 10 than on earlier versions, or vice versa.

Keep an eye on how different products by the same vendor perform. If the overall picture isn’t great, then one winning product could be a fluke.

Kaspersky Lab is constantly in touch with leading independent labs, and provides a variety of products for benchmark testing. Test results are public and can be found here.

Kaspersky Internet Security works without false positives | Kaspersky official blog
https://www.kaspersky.com/blog/zero-false-positives/12357/ (June 14, 2016)

Picture this irritating scenario: You are installing an update for Notepad++, Yahoo Messenger, or WinRAR and your antivirus pipes up to warn you the software is malware. You know that these are not exotic applications. They’re quite commonly used software, and every security solution ought to know about them. Is something wrong with your antivirus? Has the developer’s site been hacked?

If you download software from official websites only, avoiding torrent trackers and shady forums, there is probably nothing wrong with your apps — you just have a classic false positive. These warnings should not occur; they do nothing but confuse users. In the worst-case scenario, they might lead to people ignoring or even disabling their antivirus solutions. Every day, the internet gains new legitimate apps and important updates that should not be flagged as dangerous by security products.

Experts from the independent IT-security laboratory AV-Test spent 14 months — from January 2015 through February 2016 — testing how well 33 security solutions avoided false positives. The laboratory examined 19 consumer antivirus products and 14 solutions for corporate users.

AV-Test checked how well the security solutions blocked or warned users about legitimate websites, software, and certain actions carried out while installing and using legitimate apps.

To obtain relevant and reliable test results, AV-Test fed antiviruses a batch of 7.7 million files that included the latest versions of popular programs such as Microsoft Windows 7, 8, and 10 and Microsoft Office. In addition, the laboratory put to the test 7,000 websites and launched 280 applications — twice.

In general, all of the antivirus programs performed quite well. Yet the vast majority failed certain tasks — but not Kaspersky Internet Security, which made it through all of the tests without triggering a single false alarm, leaving Intel Security (McAfee), Bitdefender, AVG, and Microsoft in the dust. As for enterprise software, only Kaspersky Endpoint Security and Kaspersky Small Office Security aced all of the tests error free.

This is not the first time Kaspersky Lab has received AV-Test’s recognition. Earlier this year the laboratory called our solutions “the most efficient and reliable system watchdogs”: easy to use, with minimal system load and very strong protection.

We regularly send our solutions for independent analysis to make sure Kaspersky Lab is continuing in the right direction and doing everything we can to keep our clients safe. Having installed Kaspersky Internet Security, you can explore the Internet confident that your security solution will not waste your time with false flags.

Top of Top3 | Kaspersky official blog
https://www.kaspersky.com/blog/top3/5201/ (February 17, 2016)

Each year, Kaspersky Lab products, along with those of other vendors, are tested in a number of independent benchmarks and comparative reviews. We collect the statistics from each of these and create an annual diagram that visualizes the pool of companies participating in tests, the likely winners, and the TOP3 residents. Our products have demonstrated the best results for the third consecutive year, achieving a higher percentage of top-three finishes and awards than any other vendor: 82%.


Why we need tests

In fact, independent tests are designed for end users. The developers of security products always strive to place their solutions in the spotlight and position them as the ‘best’ on the market, but if one trusted the marketing completely, every product would be ‘best-in-class’. Tests, in this respect, offer a fairly accurate picture that is not influenced by developers and ultimately help users navigate through the deep waters of marketing jargon and slogans.

Moreover, the tests are based on different approaches and help to evaluate various aspects of a security solution. For some, the most important quality is the lowest number of false positives; others care more about whether a product performs effectively in real-world tests, regardless of false positives; and still other consumers or businesses might value minimal impact on their PC’s performance. This information is available to users through different benchmarking tests.

As for developers, benchmarks are not solely a means of showing off. We see them as an integral part of the product development process. Regular and comprehensive independent reviews provide our team with an extra pair of eyes that helps spot drawbacks in our products — and it’s generally better when they are spotted by researchers rather than by competitors.

How are benchmarks run?

There are a number of major independent test labs dotted around the world. They have years of experience and are constantly updating their testing methodologies.

Some security companies try to dismiss such tests as pointless, claiming that the products are tested in ‘lab conditions’ as opposed to real life. Others point out that participating vendors tweak their products to perform better in a particular benchmark. In reality, it’s been a long time since testers relied exclusively on signature-based methods; nowadays hardly anyone limits testing to them. Researchers are interested in seeing how products perform in real-life scenarios, so aside from feeding contestants a collection of malware samples, they employ a battery of sophisticated tests to see how the products handle complex threats.

For example: AV-Test uses a selection of zero-day threats in each test; MRG Effitas, apart from using up-to-date financial threats, relies on a number of highly sophisticated methods, such as API-hooking tests; and AV-Comparatives runs dedicated studies, such as its Whole Product Dynamic “Real-World” Protection Test. The other testers employ similar methods that accurately mimic real-life conditions. Sometimes benchmarks include recently discovered exploits, which cannot be detected with signature-based analysis at all.

Are independent tests truly that independent?

Testers are not out to favor any of the contestants specifically because testing companies treasure their independent position. The assets they have are their reputation and expertise, which they put to considerable use during testing.

In order to participate in a benchmark, all vendors have to pay a small fee. That said, a stance like ‘We don’t pay to participate in testing’ is not a good enough excuse to stay out.

Of course, there are tests commissioned by certain vendors. Usually, such benchmarks are meant to compare a certain product against competitors that are not participating in public tests, or to measure the effectiveness of security products against specific threats. In this case, only the commissioner pays, yet this approach does not amount to coercion either. The tester in any case uses established testing methodologies, and the commissioner cannot influence the results.

What is TOP3 meant for?

The underlying idea of TOP3 remains the same: it is a comprehensive assessment of different vendors’ results in various tests over an extended period of time. The methodology also remains the same; more details are available here.

It is understood that winning one test could be down to favorable conditions. A vendor that participated in only one test and showed good results would then have a 100% ratio of wins to tests taken. However, that doesn’t reflect real-world conditions. To prove its solutions are well rounded, a security vendor must submit its products to as many tests as possible.

 

Kaspersky Lab business products awarded by independent testing labs | Kaspersky official blog
https://www.kaspersky.com/blog/kaspersky-lab-business-products-awarded-by-independent-testing-labs/4161/ (July 6, 2015)

Kaspersky Lab’s products received commendations from a number of respected independent testing labs, such as AV-Test, AV-Comparatives, and a few others earlier this year. Their tests included assessment of new malware detection quality, the reliability of protection against phishing and financial threats, the impact on the device’s performance, and many other aspects equally important for both end-user and business-oriented cybersecurity products.

In this post we will focus on the awards pertaining to our business-oriented products received in Q1.

AV-Comparatives

In Q1 of 2015, Austria’s AV-Comparatives published its annual review for 2014. This included the results of two Real-World Protection tests, two File Detection tests, two Performance tests, one Proactive (Retrospective) test and the 8-month Malware Removal test. According to the review, Kaspersky Lab was one of two companies that collected the most Advanced+ certificates and, as a result, earned a Top Rated certificate.

In addition, the company received gold certificates for the File Detection and Best Overall Speed tests, as well as silver certificates for Real-World Protection, Proactive Malware Detection, and Malware Removal.

AV-Comparatives also released a Performance Test in May 2015, as well as a Heuristic/Behavioural Test and a File Detection Test, both published in March 2015.

In all three, Kaspersky Lab’s products earned Advanced+ awards.

Given that all Kaspersky Lab products share a unified codebase, the aforementioned test results apply equally to our business products.

AV-Test

In the first quarter of 2015, the German AV-Test research institute published the results of its latest Bi-Monthly Certification tests of Internet Security class solutions for corporate and home users (for November-December 2014 and January-February 2015). Kaspersky Internet Security received two AV-Test Certified certificates, and Kaspersky Endpoint Security received two AV-Test Approved certificates for their performance.

We have already seen the results of tests of Kaspersky Endpoint Security 10.2 and Kaspersky Small Office Security 4, published in April (i.e., in Q2). Both received Approved certificates as well.

MRG Effitas

MRG Effitas announced the results of two global tests – Online Banking / Browser Security Certification and the MRG Effitas 360 Assessment & Certification Programme. Both tests were carried out in Q4 of 2014.

The first test, which focused solely on protection against financial threats, included three trials to evaluate the effectiveness of current malware detection, protection against the leak of payment data on a computer that is part of a botnet, and combating attempts to inject malicious code in the browser processes.

While it was Kaspersky Internet Security that was tested in both cases, it protects online transactions using the very same technology as other Kaspersky Lab products, including Kaspersky Small Office Security: the Safe Money technology, which is also shipped as a standalone product.

Virus Bulletin

In Q1 2015 Virus Bulletin released two sets of results: December’s VB100 certification of products designed to protect Windows 7 SP1 64-bit and January’s VBSpam comparative review. Two Kaspersky Lab products – Kaspersky Internet Security and Kaspersky Small Office Security – participated in the first testing. Both detected 100% of malicious samples without producing a single false positive and were awarded VB100 certificates.

Kaspersky Security for Linux Mail Servers participated in the certification of anti-spam solutions. The product blocked 99.9% of spam messages and received yet another VBSpam+ certificate. The experts highlighted the fact that Kaspersky Security for Linux Mail Servers was one of just three solutions that did not generate a single false positive.

The second quarter of 2015 is already wrapping up, so we will soon see and hear more results from independent tests. But this year’s start looks encouraging, as Kaspersky Lab reaffirms its status as the leader in cyber protection.
