Alexander Moiseev – Kaspersky official blog
https://www.kaspersky.com/blog
The Official Blog from Kaspersky covers information to help protect you against viruses, spyware, hackers, spam & other forms of malware.

Ontologies and their use in cybersecurity | Kaspersky official blog
https://www.kaspersky.com/blog/cybersecurity-ontology/40404/ (Mon, 28 Jun 2021)

Here at Kaspersky, we regularly analyze new technologies and look for ways to put them to use in cybersecurity. Ontology may not represent a very popular approach right now, but it can speed up and simplify a lot of processes. I believe it's only a matter of time before using ontology for cybersecurity catches on.

In information systems, what’s an ontology?

In information science, an ontology is a systematic description of all of the terms in a specific subject area, their characteristics or attributes, and their relationships. For example, the Marvel Comics Universe ontology includes the names and attributes (superpowers, weapons, weaknesses) of all of the superheroes, their power levels, and so forth. An ontology can describe anything from wines to electrical grids.

Using a language such as OWL, Web Ontology Language, you can develop tools to analyze ontologies and identify hidden connections and missing or obscure details. For example, analyzing the ontology of the Marvel universe can help determine the best team of superheroes and the most expedient way to defeat a villain.

For that, as well as for similar tasks, we could use the Protégé platform, for example. Developed at Stanford University, originally for working with biomedical data, it is now a free, open-source ontology editor and framework for building intelligent systems that manage knowledge from any field.

Ontologies vs. machine learning

The tools for working with ontologies have a lot in common with machine-learning algorithms, but with one key difference: Machine-learning models predict; ontological tools deduce.

Machine-learning models analyze large arrays of data and use them to make predictions about new objects. For example, a machine-learning model might look at 100 malicious e-mails and highlight the specific characteristics they share. Then, if the model recognizes some of those characteristics in a new e-mail, it can determine that the new message is also malicious.

An ontology also figures in data analysis, but instead of leading to predictions, it points to information that logically ensues from supplied parameters. It doesn’t learn or draw on previous experiences to analyze information. For example, if we indicate in the ontology that e-mail A is a phishing e-mail and that all phishing e-mails are malicious, and then state that e-mail B is a phishing e-mail, the ontology will conclude that e-mail B is malicious. If we set out to analyze e-mail C but don’t supply any characteristics, the ontology will not make any conclusion.
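
To make the distinction concrete, here is a minimal, purely illustrative Python sketch of that deduction. A real ontology would be written in a language like OWL and processed by a reasoner such as the one bundled with Protégé; the fact and rule encoding below is a toy format invented for this example.

```python
# Toy forward-chaining deduction over explicit facts and rules -- a greatly
# simplified stand-in for what an OWL reasoner does with a real ontology.
facts = {
    ("email_A", "is_a", "PhishingEmail"),
    ("email_B", "is_a", "PhishingEmail"),
}

# Axiom: every PhishingEmail is also a MaliciousEmail (a subclass relationship).
rules = [
    (("?x", "is_a", "PhishingEmail"), ("?x", "is_a", "MaliciousEmail")),
]

def deduce(facts, rules):
    """Apply the rules until no new facts appear (forward chaining)."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, cond_pred, cond_obj), (_, concl_pred, concl_obj) in rules:
            for subject, predicate, obj in list(inferred):
                if predicate == cond_pred and obj == cond_obj:
                    new_fact = (subject, concl_pred, concl_obj)
                    if new_fact not in inferred:
                        inferred.add(new_fact)
                        changed = True
    return inferred

knowledge = deduce(facts, rules)
print(("email_B", "is_a", "MaliciousEmail") in knowledge)  # True: deduced, not predicted
print(("email_C", "is_a", "MaliciousEmail") in knowledge)  # False: no facts about C, no conclusion
```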

Ontologies and machine learning can complement each other. For example, ontologies can optimize and accelerate machine-learning models. They make the process of training models much easier by simulating logical reasoning and by being able to automatically classify and link information. And using time-saving ontological axioms — rules that describe the relationship between concepts — can narrow the input array for the machine-learning model, speeding its ability to find an answer.
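
As a rough illustration of that last point, here is a hedged sketch of an axiom acting as a pre-filter so that fewer items ever reach the machine-learning model. The trusted-domain axiom and the classify_with_ml stub are assumptions made up for this example, not part of any real product.

```python
# Hypothetical pre-filter: an ontological axiom settles the easy cases up
# front, so the (more expensive) machine-learning model sees fewer items.
TRUSTED_DOMAINS = {"partner.example"}  # assumed axiom: mail from a trusted, DKIM-verified domain is benign

emails = [
    {"id": 1, "sender_domain": "partner.example", "dkim_valid": True},
    {"id": 2, "sender_domain": "unknown.example", "dkim_valid": False},
    {"id": 3, "sender_domain": "partner.example", "dkim_valid": True},
]

def axiom_says_benign(email):
    return email["sender_domain"] in TRUSTED_DOMAINS and email["dkim_valid"]

def classify_with_ml(email):
    return "needs an ML verdict"  # placeholder for a real, slower model

verdicts = {
    email["id"]: "benign (deduced)" if axiom_says_benign(email) else classify_with_ml(email)
    for email in emails
}
print(verdicts)  # only e-mail 2 ever reaches the ML model
```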

Other uses for ontologies in cybersecurity

Ontologies can also help identify hidden opportunities or weak areas. For example, we can analyze a company infrastructure’s level of protection against a specific cyberthreat, such as ransomware. To do so, we create an ontology of potential antiransomware measures and apply it to the list of existing security measures in the organization.

Using the ontology will tell you whether the infrastructure has enough protection or needs work. You can use the same method to determine whether an IT security system meets IEC, NIST, or other standards. This can also be done manually, but it would take much longer and be more expensive.
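
A toy version of such a gap analysis might look like the sketch below; the list of anti-ransomware measures is invented for illustration and is nowhere near a complete ontology.

```python
# Toy gap analysis: compare deployed controls against a (made-up) ontology of
# anti-ransomware measures and report what is missing.
ANTI_RANSOMWARE_MEASURES = {
    "offline_backups": "Regular offline or immutable backups",
    "email_filtering": "Filtering of malicious attachments and links",
    "patch_management": "Timely patching of exposed services",
    "endpoint_protection": "Behavior-based endpoint protection",
    "least_privilege": "Restricted admin rights on workstations",
}

deployed_controls = {"email_filtering", "endpoint_protection"}

missing = {name: description for name, description in ANTI_RANSOMWARE_MEASURES.items()
           if name not in deployed_controls}

if missing:
    print("Coverage gaps against the anti-ransomware ontology:")
    for description in missing.values():
        print(f"  - {description}")
else:
    print("All modeled anti-ransomware measures are covered.")
```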

Ontologies also make the lives of IT security specialists easier by enabling them to communicate with each other in the same language. Using ontology can improve cybersecurity by helping specialists contextualize the problems and attacks that others encounter, leading them to better security measures. That kind of information also comes in handy when experts create information security architectures from scratch by offering a systematic view of vulnerabilities, attacks, and their connections.

The very concept may seem complicated and abstract, but you encounter ontologies almost every day. Consider Internet searches, for example. Ontologies underlie semantic searches, letting you search for answers to actual queries rather than getting bogged down in the meaning of each individual word in them. That greatly increases the quality of search results. Pinterest, an image-sharing social network, uses similar technologies, relying on ontologies to analyze users’ actions and reactions, and then employing that data to optimize recommendations and targeted advertising.

The above represents just a few ideas of how using ontologies can improve many aspects of business and cybertech. Here at Kaspersky, we’re interested in ontology’s prospects not only for cybersecurity, but also in terms of the bigger picture, where ontology presents huge opportunities for business.

How to fool Tesla and Mobileye autopilots | Kaspersky official blog
https://www.kaspersky.com/blog/rsa2021-tesla-mobileye-perception-gap/40064/ (Wed, 26 May 2021)

It's a common movie plot device: the main character thinks they see someone step onto the road, swerves, and ends up in a ditch. Now imagine it's real — sort of — and instead of a trick of the light or the mind, that image comes from a cybercriminal projecting, for a split second, something the car autopilot is programmed to respond to. Researchers from Georgia Tech and Ben-Gurion University of the Negev demonstrated that sort of "phantom attack" threat at RSA Conference 2021.

The idea of showing dangerous images to AI systems is not new. Techniques usually involve using modified images to force the AI to draw an unexpected conclusion. All machine-learning algorithms have this Achilles heel; knowing which attributes are key to image recognition — that is, knowing a bit about the algorithm — makes it possible to modify images so as to hinder the machine’s decision-making process or even force it to make a mistake.

The novelty of the approach demonstrated at RSA Conference 2021 is that the autopilot was shown unmodified images — an attacker need not know how the algorithm works or what attributes it uses. The images were briefly projected onto the road and nearby stationary objects, and the autopilot treated the phantoms as real objects and reacted to them.

In a variation on the theme, the images appeared for a fraction of a second in a commercial on a billboard by the side of the road, with essentially the same outcome.

Thus, the authors of the study concluded, cybercriminals can cause havoc from a safe distance, with no danger of leaving evidence at the scene of the crime. All they need to know is how long they have to project the image to fool the AI (self-driving cars have a trigger threshold to reduce their likelihood of producing false positives from, for example, dirt or debris on the camera lens or lidar).

Now, a car’s braking distance is measured in dozens of feet, so adding a few feet to allow for better situation assessment wasn’t a big deal for AI developers.

Length of time required to show a phantom image to Tesla and Mobileye recognition systems. Source

However, the figure of a couple of meters applies to the Mobileye artificial vision system and a speed of 60 km/h (about 37 mph). In that case, response time is about 125 milliseconds. Tesla’s autopilot response threshold, as experimentally determined by the researchers, is almost three times as long, at 400 milliseconds. At the same speed, that would add almost 7 meters (about 22 feet). Either way, it’s still a fraction of a second. Consequently, the researchers believe such an attack could come out of the blue — before you know it, you’re in a ditch and the image-projecting drone is gone.
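
A quick back-of-the-envelope check of those figures, using only the numbers quoted above, looks like this:

```python
# Back-of-the-envelope check of the distances quoted above:
# distance traveled before any reaction = speed x reaction time.
speed_kmh = 60
speed_ms = speed_kmh * 1000 / 3600  # ~16.7 m/s

for system, reaction_seconds in {"Mobileye": 0.125, "Tesla": 0.400}.items():
    meters = speed_ms * reaction_seconds
    feet = meters * 3.281
    print(f"{system}: {reaction_seconds * 1000:.0f} ms at {speed_kmh} km/h "
          f"is about {meters:.1f} m ({feet:.0f} ft)")
# Mobileye: ~2.1 m (7 ft); Tesla: ~6.7 m (22 ft)
```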

One quirk in the system inspires hope that autopilots will ultimately be able to repel this type of attack: Images projected onto surfaces that are unsuitable for displaying pictures are very different from reality. Perspective distortion, uneven edges, unnatural colors, extreme contrast, and other oddities make phantom images very easy for the human eye to distinguish from real objects.

As such, autopilot vulnerability to phantom attacks is a consequence of the perception gap between AI and the human brain. To overcome the gap, the authors of the study propose fitting car autopilot systems with additional checks for consistency in features such as perspective, edge smoothness, color, contrast, and brightness, and ensuring results are consistent before making any decision. Like a human jury, neural networks will deliberate on the parameters that help distinguish real camera or lidar signals from a fleeting phantom.
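
As a loose illustration of that "jury" idea, here is a toy Python committee that accepts a detection only if enough independent consistency checks consider it realistic. The check names, scores, and thresholds are assumptions made up for this sketch; in the study itself, each check would be a trained neural network.

```python
# Toy "committee" of consistency checks: each check scores, from 0 to 1, how
# realistic a detected object looks along one dimension. A projected phantom
# would score poorly on several dimensions at once.
def committee_accepts(scores, threshold=0.5, min_votes=4):
    """Accept the detection only if enough checks consider it realistic."""
    votes = sum(1 for value in scores.values() if value >= threshold)
    return votes >= min_votes

projected_sign = {            # hypothetical scores for a projected phantom
    "perspective": 0.2,       # wrong vanishing point for its apparent distance
    "edge_smoothness": 0.3,   # ragged edges on the asphalt
    "color": 0.4,             # washed-out, unnatural colors
    "contrast": 0.2,          # extreme contrast against the surface
    "shape": 0.9,             # the shape itself looks like a real sign
}
real_sign = {name: 0.9 for name in projected_sign}

print(committee_accepts(projected_sign))  # False: dismissed as a phantom
print(committee_accepts(real_sign))       # True: acted upon
```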

Doing so would, of course, add to systems' computational load and effectively lead to the parallel operation of several neural networks at once, all of which must first be trained (a long and energy-intensive process). And cars, already small clusters of computers on wheels, will have to turn into small clusters of supercomputers on wheels.

As AI accelerators become widespread, cars may be able to carry several neural networks on board, working in parallel without draining too much power. But that's a story for another day.

Kaspersky buys part of Nexway | Kaspersky official blog
https://www.kaspersky.com/blog/kaspersky-nexway/38300/ (Wed, 30 Dec 2020)

This head-spinning year is finally drawing to a close. Sure, 2020 will be remembered for all sorts of unforeseen events and difficulties, but it's presented some cool opportunities as well! As for us, we have been busy expanding our operations — not only in the traditional computer cybersecurity segment, but also in related areas such as antidrone and e-commerce systems.

If you think the latter is not for us, I beg to differ. For several years now, we have been looking at ways to protect the data of our users and clients not only on their computers and inside our systems, but also in terms of where our technology partners use that data. I’m talking primarily about online platforms for purchasing subscriptions to our products.

Of course, each and every partner complies with all payment security standards, privacy laws such as the GDPR, and so forth. But simply following the regulations is not enough for us, as our Global Transparency Initiative demonstrates. We want to set new, higher standards. We want to increase the transparency of the IT business as a whole, and our responsibility for what is entrusted to us. That’s why we invested in Nexway, one of our key partners in the e-commerce market, with a view to building the safest, most ethical, and most open online trading ecosystem possible, optimized in particular for players in the field of cybersecurity.

For those hearing the name for the first time, Nexway is a 25-year-old French company that helps businesses in 140 countries sell their products online.

Kaspersky has worked with Nexway for many years, and we find the platform’s strength lies in its adaptability to the legal and fiscal realities of each country of operation. We aim to supplement that flexibility with dynamic services and technologies that enable Nexway’s partners to adapt their businesses rapidly to any changes, even such drastic and quick ones as those this year has brought.

For existing partners, and especially for security product vendors, a note about our restaurant-style, “open kitchen” concept: You feel safer eating food you’ve seen being properly prepared, right?

The same goes for the processing of data and payments. Through certified processes and regular audits, Nexway will be able to demonstrate that it stores and handles all client and subscriber data securely and in full compliance with the law and with partners’ internal policies, no matter how stringent, and that no one (including Nexway itself) uses that data.

Put simply, your subscribers are yours only, unless otherwise agreed with the said subscribers. The product of this relationship of trust will be a marketplace optimized for selling privacy and security products — one that buyers and subscribers alike can trust.

I want to emphasize that our role is to assist Nexway in bringing that vision to life, not to bring about major organizational changes. Nexway will remain a separate company with its own management, processes, and reporting, in accordance with all European laws and regulations. We will limit our input to technological expertise and strategic direction. We certainly hope Nexway’s partners will continue to collaborate with the company, and that the enhanced level of transparency and openness will attract new ones. Finally, for the record, we have no plans whatsoever to end our partnerships with other e-commerce platforms.

The road ahead is full of opportunities and challenges, but as the Global Transparency Initiative shows, meeting them is very worthwhile! The trust of partners and clients is the most valuable currency there is, which is why we are confident of success.

The future of cybersecurity | Kaspersky official blog
https://www.kaspersky.com/blog/start-immunizing/27813/ (Thu, 01 Aug 2019)

I've been in the cybersecurity industry for more than 15 years. During that time, and together with other infosec veterans, I experienced the rise of the FUD (fear, uncertainty, doubt) hype firsthand. I have to admit, it worked. Neuromarketing science got it right with that one. Fear really did help sell security products. Like any strong medicine, however, FUD had a side effect. Not just one, actually — it had many.

We as an industry cannot escape FUD because we’re addicted to it. For us, FUD manifests itself in some of our customers demanding proof that what we’re telling them about is not just another potential breach but a real danger. Unfortunately, the best proof that a danger is real comes when something bad happens. And that’s why the media got addicted to FUD as well. The more millions of dollars — or euros, or whatever other currency — someone loses, the better the story.

Now, enter the regulators, with their tendency to overreact and to impose strict compliance regulations and fines. That effectively puts security researchers, product developers, marketers, media, and regulators into a strategic trap that in game theory is called the prisoner's dilemma: a situation in which all players must use suboptimal strategies because to do otherwise would cause them to lose. In the case of the infosec industry, using that suboptimal strategy means generating even more FUD.

To break out of this trap, we need to understand one thing: the future cannot be built on the basis of fear.

The future I'm talking about is not distant; it's already here. Robots are already driving trucks and roaming around Mars. They write music and create new recipes for food. This future is far from perfect from many perspectives, including that of cybersecurity, but we're here to empower it, not to hinder it.

Eugene Kaspersky recently said that he believes “the concept of cybersecurity will soon become obsolete, and cyberimmunity will take its place.” That may sound bizarre, but it has a much deeper meaning that is worth explaining. Let me dive a little bit deeper into the concept of cyberimmunity.


Cyberimmunity is a great term to explain our vision of a safer future. In real life, an organism's immune system is never perfect, and viruses or other malignant microbiological objects still find ways to fool it, or even to attack the immune system itself. However, immune systems share a very important trait: They learn and adapt. They can be "educated" through vaccination about possible dangers. In times of peril, we can assist them with ready-made antibodies.

In cybersecurity, we used to deal mostly with the latter. When our customers' IT systems succumbed to infection, we had to be ready with solutions. But that's when the addiction to FUD started, with security vendors providing ready relief from diseases that hurt badly. That "superpower" feeling proved addictive to infosec vendors. We were like, "Yes, it's time for hardcore antibiotics, because, trust us, the problem is really that serious." But using hardcore antibiotics makes sense only when the infection has already clawed its way in — and that, we can all agree, is far from an ideal scenario. In our cybersecurity metaphor, it would've been better if the immune system could have stopped that infection before it took hold.

Today, IT systems have become very heterogeneous and cannot be viewed outside of the context of humans — those who operate the devices and those who interact with the devices. The demand for "educating the immune system" has become so great that we are actually seeing a trend toward prioritizing the provision of services even over the product, which used to be primary. (The "product" nowadays is in many cases a customized solution, something that is adapted to the specifics of the IT system it's designed to fit in.)

Understanding of this vision didn't come all at once. And just like with vaccination, it's not a one-shot approach but, rather, a series of vaccinations, all aimed at the same goal: stronger cyberimmunity for a safer future.

First and foremost, a safer future can be built only on a safe foundation. We believe this is possible when all systems are designed from the start with security in mind. Real applications in the telecommunications and automotive industries are already testing our visionary approach. Because carmakers are especially keen on safety, our mission statement of "building a safer world" is critical there. In the automotive world, security really means safety.

As with biological vaccination, we expect the cyberimmunity concept to be met with skepticism. The very first question I’d expect to hear is: “Can we really trust the vaccine and its vendor?” Trust in cybersecurity is of paramount importance, and we believe that simply giving our word is not enough. If a cybersecurity firm’s clients want to see software’s security and integrity, they have every right to demand it — in the form of source code. We make that available, and all clients need is a pair of attentive eyes and a PC to analyze how things work. We do require a PC in sanitized condition for that code viewing, however, to ensure that observers can’t tamper with the code themselves. And just as you may seek consultations from various doctors, having a trusted third party view the code as well makes sense. With IT solutions, that outside viewer could be representatives of a Big Four auditing firm who can explain what those bits and bytes actually mean for your business.

Another important component is the ability of the immune system to withstand attacks against it. Cybersecurity software is still software, and it can have flaws of its own. The best way to uncover those flaws is to expose the software to white-hat hackers, the ones who find flaws and report them back to vendors. The idea of offering a prize for finding a bug in software, first introduced in 1983, was absolutely brilliant, as it greatly reduced the financial incentives for black-hat hackers (who exploit the flaws they find or sell them to other cybercriminals). However, white hats demand guarantees that the company they investigate won't turn on them and prosecute them.

Where there’s demand, there’s supply, so recently we’ve seen suggestions for agreements between researchers and companies such that the former can safely try to crack the latter without fear of being accused of any crime, as long as they follow the rules. I believe that moving in this direction is a step toward a safer future — one with less fear-mongering than the past — but this journey is going to take some time.

Streamlining the KYC procedure with blockchain | Kaspersky official blog
https://www.kaspersky.com/blog/kyc-blockchain/27348/ (Fri, 14 Jun 2019)

Last year, I wrote a post about the possible privacy implications of applying blockchain technology to such areas as education, health care, and human resource management. However, a blockchain-based solution can also help with a problem plagued by the shortcomings of traditional approaches, one that involves dealing with personal data. I am talking about KYC (know your customer) and recent advances in using blockchain for KYC procedures.

The term “know your customer” originally came from financial services. Banks needed to identify their customers, make sure they didn’t cheat, and be able to check their credit history. So, banks developed a set of documents and a procedure that pretty much standardized the handling of loan requests.

Later, other areas of business adopted the model. If company A wanted to do business with company B, A would need a set of documents including (but not limited to): company B's registration number, tax number, and bank routing and account number. Typically, the authenticity of these documents has to be verified by a third party, and that's where bureaucracy kicks in.

Normally, processing the paperwork — making phone calls and requesting required confirmations — takes a lot of time and effort. That’s a problem. But sometimes clerks simply stamp the papers without thorough checking. Bigger problem.

When bureaucratic checks fail, bureaucracy does a strange thing: it introduces more checks. Take the banking industry, for example. In 2017, according to a Thomson Reuters survey, KYC procedures took an average of 32 days, up from 28 days in 2016.

Digital signatures, once viewed as a possible solution to these problems, cannot obviate the document authenticity checks that KYC procedures require. Besides, digital signatures can be forged or stolen.

Blockchain technology can dramatically reduce the time needed for authenticity checks. Commercial blockchain architects invented the term privileged nodes some time ago. In our example, when company A needs a set of documents about company B verified, it sends a request to a certification node (owned by the state or an authorized agent). The certification node checks that company B has provided consent for the requested documents, checks the validity of the documents (i.e., that they have not expired), and only then returns them to company A.

All of this logic can be programmed into a smart contract and tweaked to the needs of the businesses involved. For example, the smart contract may be programmed to yield the full set of documents or an incomplete set of documents that haven’t expired and that company B has consented to provide. In the latter case company A can decide whether to take the risk of continuing to do business with company B, or to halt further operations.
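
Here is a hedged, non-authoritative Python sketch of that certification-node logic. A real implementation would live in a smart contract or chaincode on the chosen platform; the document names, dates, and consent sets below are invented for illustration.

```python
from datetime import date

# Toy model of the certification-node logic described above: release only the
# documents that company B has consented to share and that have not expired.
documents_of_b = {
    "registration_number": {"expires": date(2030, 1, 1), "consented_to": {"company_A"}},
    "tax_number":          {"expires": date(2019, 1, 1), "consented_to": {"company_A"}},  # expired
    "bank_details":        {"expires": date(2030, 1, 1), "consented_to": set()},          # no consent
}

def fulfil_kyc_request(requester, requested_documents, today):
    released, withheld = {}, []
    for name in requested_documents:
        document = documents_of_b.get(name)
        if document and requester in document["consented_to"] and document["expires"] > today:
            released[name] = "verified"
        else:
            withheld.append(name)  # company A may still learn that the document exists
    return released, withheld

released, withheld = fulfil_kyc_request(
    "company_A",
    ["registration_number", "tax_number", "bank_details"],
    date(2019, 6, 14),
)
print(released)  # {'registration_number': 'verified'}
print(withheld)  # ['tax_number', 'bank_details'] -- A decides whether to proceed anyway
```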

The innate flexibility of smart contracts, in this case, gives businesses freedom of choice, which is essential for a market economy to flourish. For example, company B may not consent to providing a specific document to company A, but it can notify A that such a document exists and has been verified by a privileged node.

We can apply this approach to dealing with consumers' personal data, where identity management is caught between the rock of antifraud measures and the hard place of the GDPR. On the one hand, knowledge that a given consumer has a clean credit history verified by a credible authority is essential for antifraud measures, but on the other hand, banks have no need to keep or store that data.

Furthermore, consent can be set to expire in smart contracts — so, for example, company A loses access to certain documents from company B once consent has expired. Sounds perfect, doesn't it? Well, not really; we still need to be very careful about what information is included in the blocks exchanged through the blockchain network. Even without the documents themselves, the distributed database should contain some of their characteristics, such as checksums or hashes and expiration dates. The availability of this info, along with the data about requests and consent, on the distributed ledger can reduce the time required to pass the KYC procedures from days (or months, let's be honest) to hours or even minutes!
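
A minimal sketch of that idea, assuming the ledger entry holds nothing but a SHA-256 fingerprint of the document and a consent expiration date (both field names are invented for this example):

```python
import hashlib
from datetime import date

# The ledger entry stores only a fingerprint of the document plus metadata --
# never the document itself. Field names here are purely illustrative.
document_bytes = b"Company B certificate of registration, no. 12345"
ledger_entry = {
    "doc_hash": hashlib.sha256(document_bytes).hexdigest(),
    "consent_expires": date(2019, 12, 31),
}

def check_against_ledger(received_bytes, entry, today):
    if today > entry["consent_expires"]:
        return "consent expired: access must be refused"
    if hashlib.sha256(received_bytes).hexdigest() != entry["doc_hash"]:
        return "document does not match the fingerprint on the ledger"
    return "document verified"

print(check_against_ledger(document_bytes, ledger_entry, date(2019, 6, 14)))  # verified
print(check_against_ledger(document_bytes, ledger_entry, date(2020, 1, 10)))  # consent expired
```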

What I’m talking about here is not a theory. IBM has supported the commercial blockchain platform Hyperledger since 2015, and the company has already demonstrated a working KYC proof-of-concept for a bunch of global banks, including Deutsche Bank and HSBC. In the B2C domain, NEC is actively pushing a one-step KYC solution that aims to improve the customer experience dramatically.

To make sure it all works as intended, we must pay close attention to the smart contract behind such a solution. Smart contracts are not immune to mistakes, and mistakes in this case will ruin the idea of secure automation of KYC. We already know that the current state of smart-contract development culture is far from perfect.

That is why the core of our blockchain security package is a smart-contract code review. Our antimalware experts identify known security vulnerabilities and design flaws as well as look for undocumented features of smart contracts. They present a detailed report on any vulnerabilities detected and provide guidance on how to fix them. Learn more on the Kaspersky Blockchain Security webpage.

Why ICO security is a must | Kaspersky official blog
https://www.kaspersky.com/blog/ico-security/26811/ (Tue, 30 Apr 2019)

A rather naïve belief many blockchain enthusiasts share is that code backed by blockchain fabric is self-sufficient. "Code is Law," as they say. Unfortunately, reality has already proved this maxim wrong, because, well, code is written by people, and people are prone to making mistakes. Even when machines write code, it's still likely to contain flaws: For example, the exploitation of the DAO smart contract eventually led to the hard fork that split Ethereum Classic off from Ethereum. This sort of trouble has happened more than once and with more than one blockchain.

Problems are not limited to code flaws. From an information security perspective, blockchain systems — including nodes and wallets — are just software. And the people who use this software have a tendency to fall for social-engineering tricks. Some problems, such as the use of phishing to steal coins from wallets, can be solved with security software on the consumer side. Others cannot, such as people believing scammers who promise ROIs of hundreds of percent and then disappear.

Initial coin offerings (ICOs) remain popular among startups raising funds; the number of token sales is higher than it was back in 2017. At the same time, fraud did not diminish as crypto prices did. One estimate has losses from last year totaling $1.7 billion, up 400% from 2017 — the record-setting year for single-incident losses. The most notable example, vulnerabilities in the Parity Wallet, resulted first in the loss of $30 million worth of Ethereum and then in the locking of $154 million worth of Ethereum tokens through the removal of their data from the blockchain.

It got worse. In 2018, about $950 million was lost to theft from crypto exchanges and wallets, and another $750 million was lost as a result of fraudulent ICOs or token sales, exchange hacks, and other schemes. It's no wonder regulators are catching up. The stance of such financial authorities as the US Securities and Exchange Commission is that tokens, especially those that assume the receipt of profits from the startup that organizes the sales of its tokens, should be treated as financial securities with all that implies, including criminal prosecution if things go south for investors (buyers of tokens). That is true as well for an STO (security token offering), so if you consider token sales a means to boost your business, we suggest you start thinking of selling tokens the way you would think about issuing securities. That means: Stop a moment and think about security (pun intended).

The four major areas of risk for token sales are smart-contract vulnerabilities, staff wrongdoings, phishing attacks on investors, and operations security.

Smart-contract vulnerabilities

The lousiness of smart-contract writers is inexplicable. Estimates from several years ago claimed smart-contract code contained about six times as many bugs as commercial code. Based on 2018 stats, the situation seemingly has not improved.

From our perspective, having dealt with software flaws for more than two decades, studying smart contracts is actually quite similar to conducting application security testing. Sometimes it's even simpler, because smart contracts are written in a scripting language before compilation. There's nothing new under the sun, really — you can see for yourself that most of the top mistakes people make have long been known in the "regular" software world. For example, the recursive calls that led to the DAO heist and the subsequent Ethereum hard fork, or the improper access control seen in the Parity Wallet, are considered rookie mistakes in the world of information security.
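
For readers unfamiliar with the recursive-call problem, here is a plain Python simulation of the control flow behind it. This is not Solidity and not the actual DAO code, just the general pattern of paying out before updating state, with names invented for the sketch.

```python
# Plain-Python simulation of the reentrancy pattern: the vulnerable "contract"
# pays out before updating the balance, so a malicious callback can re-enter
# withdraw() and drain more than it deposited.
class VulnerableVault:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.paid_out = 0

    def withdraw(self, account, callback):
        amount = self.balances.get(account, 0)
        if amount > 0:
            self.paid_out += amount     # 1) send the funds first...
            callback(self)              # 2) ...which hands control back to the caller...
            self.balances[account] = 0  # 3) ...and only then zero the balance

def attacker_callback(vault, depth=[0]):
    # Re-enter withdraw() a few times before the balance is ever zeroed.
    if depth[0] < 3:
        depth[0] += 1
        vault.withdraw("attacker", attacker_callback)

vault = VulnerableVault({"attacker": 10, "honest_user": 90})
vault.withdraw("attacker", attacker_callback)
print(vault.paid_out)  # 40 -- four payouts of 10 from a single 10-unit deposit
```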

It sometimes takes an attentive (and experienced) eye to look at the code, so don't be too proud to ask an expert for a smart-contract review before you commit the code to the blockchain — once it's there, you will not be able to roll it back or change it.

Staff wrongdoings

You might be expecting a traditional rant about humans being the weakest link in cybersecurity, but that's not my point here. Instead, I want to focus on the goal of transforming employees into a "human firewall" through effort and dedication to improving cyberhygiene. In fact, we've seen the number of incidents in some organizations drop by 90% after our training.

Phishing attacks

Fame never comes alone, and once your ICO gains traction, you can assume phishing scammers will follow. Sometimes, as our analysis has shown, phishing sites pop up even before the official ones do. While it’s hard to take down phishing sites that target the buyers of tokens, you can still detect them and notify your current and potential investors. It’s better to have good fame than bad, right?

Operations security

For companies operating in the financial securities market, incident response capability and employee training are not luxuries; they’re absolute necessities. You may push the tasks off, figuring you’ll deal with them later, if regulators ever impose more restrictions. Well, in cybersecurity, you have to think “when,” not “if” — and add one important consideration: Assume an incident has already happened. That mindset will pay off in more ways than one, and what helps your reputation among investors today (you are helping them prevent losses, remember?) will save you from being slapped with fines — and, possibly, criminal charges — tomorrow.

You can learn more about solutions for ICOs and STOs here.

Is blockchain compatible with privacy? | Kaspersky official blog
https://www.kaspersky.com/blog/blockchain-and-privacy/24427/ (Wed, 31 Oct 2018)

Coming up on the tenth anniversary of Satoshi Nakamoto's paper, do we really need yet another take on Bitcoin? Well, I think so. Today, I am going to focus on an aspect of this technology that needs more discussion — privacy.

The bedrock of blockchain — that every transaction is added into the history and written in "blocks" — has already backfired on more than one cybercriminal. The tremendous success of investigators in tracking down the perpetrators is a direct result of the history of their transactions being forever (as much as this adjective can be applied to the matter) inscribed in the chain of blocks. That, by the way, raises an important question: Why aren't financial regulators embracing cryptocurrencies already?

Of course, clarity is not always what we want. Consider privacy. This basic human right has been enshrined in the laws of many countries. In Europe, for example, the General Data Protection Regulation (GDPR) states that every person has the right to withdraw their consent at any time and to retrieve or permanently delete any personal information they had previously agreed to share. How does that square with blockchain's permanent record?

Here’s an example: Recently, I heard about a blockchain startup called MedRec. It enables medical practitioners to access patient data from different local storage systems. Of course, patient consent is required — but what happens if they change their minds?

To be fair, the demonstrated proof of concept didn’t keep the patient data on the blockchain itself — instead, the blocks contained information about the patient–provider relationship. But citizens of the EU are supposed to be able to revoke permission to use even that information — and, unless it’s stored on a privately held blockchain, they can’t. It’s worth noting that if the healthcare industry embraces the idea, then medical records will be kept in the public blockchain, because interoperability is a key issue for adoption.

Another example comes from the education sector. The University of Nicosia was the first educational institution to accept bitcoins as payment for their online courses. They went even further — they put the certificates of completion into the blockchain as well.

The intention is clear: anyone who has the specific information (namely, the hash) provided by the certificate's owner can check that the owner did indeed successfully complete the course. By design, the ledger contains only the hash, which is practically impossible to reverse unless you already have the original data, which means it offers roughly the same level of pseudonymity as Bitcoin itself. As I stressed above, that has already proven useful in tracking down criminals.
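
A minimal sketch of that kind of check, assuming the chain stores nothing but a SHA-256 hash of the certificate text (the certificate contents and names here are invented for illustration):

```python
import hashlib

# Only a fingerprint of the certificate ends up on the chain.
certificate = b"Jane Doe completed 'Introduction to Digital Currencies', 2018"
on_chain_hash = hashlib.sha256(certificate).hexdigest()

# Someone who received the certificate data from its owner can verify it...
def matches_chain(presented_bytes, published_hash):
    return hashlib.sha256(presented_bytes).hexdigest() == published_hash

print(matches_chain(certificate, on_chain_hash))                              # True
print(matches_chain(b"Jane Doe completed some other course", on_chain_hash))  # False
# ...but the hash alone tells an outside observer nothing about who completed what.
```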

Of course, the information that someone completed online courses may not be considered personal. I’m not going to argue that point here, only note that definitions of private and nonprivate information may evolve with time, but whatever’s on the blockchain is going to stay there.

Some startups go even further, pitching extra services for HR. They focus mainly on the idea of providing hiring managers with candidate information verified by a distributed ledger. This information, including entirely personal tidbits such as a person's experience, previous jobs, and accomplishments, will be impossible to erase if people choose to withdraw their consent. Luckily, it seems that such startups have dropped off the radar. However, I would not be surprised if similar ideas resurfaced somewhere, somehow.

To conclude, I'd like to recall how we got here. Our understanding of which information is personal and which is not has evolved along with the IT industry itself. Today we have a legal definition of "personally identifiable information," which is a good start. But I believe that when applying blockchain to solving business problems, we should never forget about privacy as a basic human right.

If my data is on lots of different computers, how can it still be private? And if neither I, nor anyone else in particular, has direct control over all of those computers, what do I need to do to remove this data? Blockchain is great for lots of things, but not for everything. In the end, unremovable personal data is the opposite of privacy.

A platinum award from our customers | Kaspersky official blog
https://www.kaspersky.com/blog/a-platinum-award-from-our-customers/19817/ (Tue, 17 Oct 2017)

How can you get an objective look at your customers' opinion of your company? From October 1, 2016 to September 1, 2017, our customers posted 199 reviews of our products on the Gartner Peer Insights portal. And based on our customers' opinions, we received the Platinum Award as part of the 2017 Gartner Peer Insights Customer Choice Awards for Endpoint Protection Platforms.

About Gartner Peer Insights

Of course, the first thing that comes to mind when Gartner is mentioned is the company’s “Magic Quadrants.” One of the objectives of that format is to help customers choose the right solution. However, it’s important for businesses to know more than just what analysts think about IT products. They’re also interested in the opinion of direct users of these solutions in real life. So just a little over a year ago, Gartner presented something different — Gartner Peer Insights.

Unlike a Quadrant, where a company's standing in the market is depicted not by a rating but by its relative position on a graph, the essence of Gartner Peer Insights is that customers themselves, not Gartner analysts, rate their vendors' IT solutions and publish their reviews on a special website. A lot of factors affect the final rating: product quality, the sales department's customer service, and the professionalism of the technical support desk.

Our solution was reviewed in the Endpoint Protection Platforms category and was recognized by end users.

How Gartner Peer Insights differs from other “survey ratings”

The key differences are credibility and transparency. Gartner doesn't just collect comments about products; the company's analysts also ensure that the reviews are written by real employees of companies where the solutions are actually used. All materials are available on the website, but without the authors' names.

Gartner also ensures that comments from different regions and from companies of various sizes doing business in different industries are taken into account. This helps prevent companies with single-purpose solutions, relevant only in a specific market and for a specific industry, from ending up in the rating. Gartner places special emphasis on the scale of the companies — at least half of the comments must be written by representatives of enterprise companies. Details of the Peer Insights methodology are available on the Gartner website.

We should add that this is the first time customer choice awards for the endpoint protection platform segment have been published, so we were the very first recipients of the platinum award in this category. We’re very grateful to Gartner and our customers for this honor.

The Gartner Peer Insights Customer Choice Award Logo is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved. Gartner Peer Insights Customer Choice Awards are determined by the subjective opinions of individual end-user customers based on their own experiences, the number of published reviews on Gartner Peer Insights and overall ratings for a given vendor in the market, as further described here http://www.gartner.com/reviews-pages/peer-insights-customer-choice-awards/ and are not intended in any way to represent the views of Gartner or its affiliates.

Tips for owners of small and medium businesses | Kaspersky official blog
https://www.kaspersky.com/blog/small-to-medium-business-guide-moiseev/18070/ (Thu, 24 Aug 2017)

Every company that reaches a certain threshold has to face the challenge of transformation. That threshold might be reaching a certain revenue — for example, surpassing eight figures. At this point, technology companies have two ways to go.

The first is to drive average revenue per user upward. When you have reached a customer-base plateau, attracting more customers becomes too costly, so it makes sense to offer more value to existing customers through additional features, options, and so forth. Doing so requires having a good feedback channel; without hearing from your customers it’s impossible to understand their pain points and offer solutions.

That approach also requires a significant investment in development, and you may eventually reach a point where the investment simply does not pay off. The payoff doesn’t increase infinitely.

The second approach is to be disruptive, developing something different. For example, a company that makes cars could start making motorcycles — or electric vehicles, maybe autonomous as well. It’s a serious move that requires reconfiguring the production cycle and, basically, developing a new business.

But disruptions don't just happen out of the blue. When talking with customers and understanding their needs, you may come to a point where the customer requests a "fork that is round." Sometimes, if you just do what the customer wants (some may call this good customer care), you miss an important indicator — in this case, that what your customer needs is a spoon.

That’s what makes Kaspersky Lab an interesting business case. We’re constantly in the process of disruptive development through endless engagement with our customers. We don’t just listen, though — we educate our customers about the threat landscape. After each discovery our GReAT experts make, we have to make sure our customers know about the dangers and how those dangers may affect them. Essentially, the time when our customers could use “forks” to toss away small amounts of malware is long over, and today our “spoons” are well-automated with cloud-enabled machine learning techniques.

Enterprise sales is not about solutions; it is about partnerships.

One of the biggest misconceptions we, as a midsize company, had to overcome was that selling to enterprises is similar to selling to SMBs. We had been selling our antivirus as a commodity for quite some time, and we saw that some enterprises were enjoying it. But we didn't really understand their needs until we started treating our enterprise customers as partners rather than simply as buyers.

Our Ferrari partnership is an excellent example of how a partnership can transform a product line. Through this partnership, we’ve learned how to better protect Ferrari’s car-making infrastructure, something no other competitor was able to come up with in time. Now, using that experience, we can scale our transportation system solutions to other automotive vendors — and even beyond, to the car itself, through cooperation with AVL.

Another example is information security for financial organizations. Five years ago, we started talking with banks about their needs, and besides educating them about cyberthreats, we also learned their specific terminology. As a result, we now have a business division dedicated to providing information security specifically for financial organizations.

In both cases, we brought loads of cybersecurity experience to the enterprises but little knowledge of their business specifics. While educating them about cyberthreats, we listened to them and learned about their goals and obstacles, and that’s how we came up with new solutions.

And that’s what you can do, too.
