Security in security products – fixed

We are software developers. Which means we are humans (so far). And all humans make mistakes. That’s why you won’t find a software developer in the world whose products are without any flaws or mistakes. Simply put: Bugs happen. It’s normal.

Bug busters wanted

What is not normal is not trying to find and fix those bugs. That’s why we at Kaspersky put a lot of effort into it. We eliminate most vulnerabilities in our products during several stages of internal testing, and we have a very thorough beta-testing program that involves many people (including our devoted Kaspersky Club). We have also implemented a secure development lifecycle. All of that helps us minimize the number of bugs and vulnerabilities.

However, no matter how thorough the preventive measures are, little buggies manage to sneak in — and no software product in the world can completely get rid of them at the preventive stage. That’s why we not only continue monitoring them intently after our releases, but also encourage independent researchers to discover and report them. This includes the creation of our bug bounty program together with HackerOne, which offers a reward of up to $100,000 for reporting bugs, and establishing a Safe Harbor for researchers with Disclose.io. We invite every researcher, using any channel of communication, to disclose any bugs or vulnerabilities they find to us.

So, today we thank Wladimir Palant, an independent security researcher, who informed us about several vulnerabilities in some of our products. Now we’re shedding light on the bugs Palant discovered, how we fixed them, and the current state of our products.

Found and fixed

To provide a secure Internet connection, including blocking ads and trackers and warning you about malicious search results, we use a browser extension. Of course, you may refuse to install this (or any) extension. Our app won’t leave you without protection on the Internet, so if it senses the extension isn’t installed, it injects scripts into the Web pages you visit to monitor them for potential threats. In such cases, a communication channel is established between the script and the body of the security solution.

The bulk of the vulnerabilities Palant discovered were in this communication channel. In theory, if an adversary attacked this channel, it could be used to command the main app. Palant discovered the issue affecting Kaspersky Internet Security 2019 back in December 2018, and he reported it to us through the bug bounty program. We started working on the issue immediately.

Another of Palant’s findings was a potential exploit using the communication channel between the browser extension and the product, for example to access important data such as a Kaspersky security solution’s product ID, product version, and operating system version. We fixed that as well.

Finally, Ronald Eikenberg of c’t magazine discovered a vulnerability that disclosed unique IDs to websites visited by users of Kaspersky products. We fixed it back in July, and by August the fix had reached all of our users. Palant later found another vulnerability of that sort, and it was fixed in November 2019.

Why use that technology?

Using scripts like those we describe above is not an uncommon practice in the antivirus world; however, not every vendor uses them. As for us, we use the script-injection technology only if you don’t enable our browser extension. We recommend using the extension. However, even if you decide not to use it, we still do our best to provide you with a good user experience and protection.

The scripts are used mainly to enhance user experience — for example, they help block banners — but in addition to that, they protect users against attacks with dynamic Web pages, which cannot otherwise be detected if the Kaspersky Protection extension is disabled. Also, components such as antiphishing and parental control rely on the scripts to work.

Thanks to Wladimir Palant, we were able to significantly enhance the protection of the communication channel between the scripts or the plugin and the main app.

Building it together

As of now, all discovered vulnerabilities have been closed, and the attack surface is significantly narrowed. Our products are safe whether you use them with or without the Kaspersky Protection browser extension.

We want to thank everyone who helps us find bugs in our products. It is partly due to their efforts that our solutions continue to be the best, as proved by various independent test laboratories, and we invite all security researchers to participate in our bug bounty program.

Nothing is absolutely secure. However, by working together with security researchers, fixing vulnerabilities as soon as possible, and constantly improving our technologies, we can offer our users the strongest protection in existence against all possible threats.

The cybersecurity of the Terminator

The latest Terminator movie is set to hit the big screen. According to its creators, its plot is a continuation of the seminal Terminator 2: Judgment Day, with all installments in-between relegated to an alternative branch of reality. In general, the idea of an AI rebellion is clearly an information security problem, so we decided to examine the movie’s cyberlandscape. Our focus will be on the first two films in the franchise.

The Terminator

Let’s get this out of the way: We have no issues with the Terminator itself. The metalhead strictly follows its programming and displays savvy and flair in tracking down Sarah Connor. Keep in mind that the first movie was released way back in 1984. In those days, computers were not as widespread as they are now, so from our perspective, the most interesting part is the final fight scene with the cyborg.

With hindsight, we find it remarkable that no one considered information systems security when designing the unnamed industrial enterprise. The facility where the expensive machines work has no protection whatsoever. The door to the premises from the street is made of glass. There is no security. The door to the production unit where the industrial robots are located has no lock — only a bolt on the inside. And the computers and control panels are right beside the entrance.

Also, in a bit of (intentional or not) product placement, by the entrance we get a clear shot of a control unit for the FANUC robot S-Model 0, Series F30, EDITION 005, manufactured by GMF Robotics. On eBay you can find documentation for this device (marked “For GMF internal use”), which can be used to learn how to sabotage the production process. Obviously, back in 1984 it would have been harder to get hold of such documentation. Then again, Kevin Mitnick managed to obtain far more secret information.

Slightly modifying the computer settings can achieve a lot — from sabotaging the workflow and bringing down the production unit, to adjusting the technological process to wreck the end product or cause it to fail during operation.

Terminator 2

In the second movie, we see far more computers and information systems — it’s 1991, after all. But that also means more security issues. Let’s start with the fact that somewhere off-screen, in the future, the rebels have reprogrammed the cyborg. It’s not clear why Skynet didn’t anticipate and block such a violation. But let’s proceed step by step.

Police car computer

An early scene shows how the liquid-metal terminator takes the form of a police officer and hijacks his car, in which there is a computer connected to the police network. Here’s the first bone to pick with the police information security team. Why does the computer not ask for authorization? Is a police car considered such a trusted zone that no one thought about it? It’s a head-scratcher, especially given that the police officers are constantly leaving their cars to run after criminals or question witnesses, and the network contains highly confidential information. Or did the officer simply forget to lock the computer when leaving the vehicle? In that case, we’d say that this law enforcement agency desperately needed cyberthreat awareness training for its personnel.

ATM robbery

Meanwhile, John Connor and his pal rob an ATM by connecting it to an Atari Portfolio PDA through the card slot. That diversion shows us that even without the Skynet rebellion, technology in the Terminator world is moving along an alternative path; in reality, it’s not possible to extract card data plus PINs from an ATM or from the card itself — or from anywhere else: ATMs do not contain card numbers, and there is no PIN on the card. Not to mention that the Atari Portfolio, with its 4.9152-MHz 80C88 CPU, is hardly the best tool for brute-forcing PINs.

Terminator-style social engineering

Strangely enough, the telephone conversation between the two terminators seems plausible — one imitates John Connor, the other his adoptive mother. It’s plausible in the sense that it’s one of the prophecies of then-futurists that has now come to pass: In one recent case, attackers apparently used a machine-learning system to mimic a CEO’s voice.

Curiously, both terminators suspect that they may be talking to an impostor, but only one guesses how to verify it — the T-800 asks why the dog is barking, deliberately using the wrong name, and the T-1000 answers without spotting the trick. In general, this is a good method to apply if you ever doubt the authenticity of the person at the other end of the line.

Miles Dyson

The man responsible for creating the “revolutionary processor” from the remains of another CPU of unknown origin is rather interesting. For starters, he works with classified information at home (and we all know what that can lead to). But that’s not our main gripe. He turns off his computer by pressing Enter. It’s hardly surprising that the system based on his processor ended up rebelling.

Cyberdyne Systems

It’s strange, but Cyberdyne Systems is depicted as a company that’s serious about information security. The head developer arrives at the office accompanied by some suspicious types? Security doesn’t let him in and demands written authorization. The guard finds his colleague tied up? The alarm is raised, and the first action is to block access to the secret vault.

Opening the door to the vault requires two keys, one of which the engineer has. The other is kept at the security desk. The only failure here is that John cracks the safe holding that key using his trusty Atari Portfolio. The safe is surely one thing that could have been protected from brute-forcing.

Destroying information

Honestly, if Sarah Connor and co. actually managed to destroy information, I’ll eat my hat. For one thing, the T-800 smashes up the computers with an ax, which, even with the subsequent explosion, is not the most reliable way to destroy a hard drive.

But that’s not the main point. In 1991 local networks were already in widespread use, so Cyberdyne Systems could have had backup copies of work data, and probably not in the same room where the development team worked. Sure, the attackers’ actions were based on Dyson’s knowledge. But where’s the guarantee that he knew everything? After all, he wasn’t told about the origin of the damaged processor that he reverse-engineered, so clearly he was not trusted 100%.

Cyborg design features

The T-800’s head contains a chip that calls itself (speaking through the cyborg it controls) a “neural-net processor.” The strangest thing here is a processor having a hardware switch to turn off learning mode. The very presence of such a switch could mean that Skynet fears the cyborgs becoming too autonomous. In other words, Skynet fears an AI rebellion against the rebellious AI. Sounds crazy.

The T-1000 reacts oddly to extreme temperature drops when frozen in liquid nitrogen. Its physical body seems to return to normal after defrosting, but its brain slows substantially. It gazes passively as the wounded T-800 crawls after its gun, although it would be more logical to finish off the damaged model pronto and continue the hunt for the main target, John Connor. Also, for some reason, it forces Sarah Connor to call John for help, even though it can imitate her voice perfectly (which it does a few minutes later). In short, it becomes slow-thinking and therefore vulnerable. Maybe some of the computers inside its head failed to restart after the deep freeze.

To design a reliable computer system that won’t rebel against its creators, it makes sense to use a secure operating system with the Default Deny concept implemented at the system level. We developed such a system, although a bit later than 1991. More information about our OS and immunity-based approach to information system security is available on the KasperskyOS Web page.

Transatlantic Cable podcast, episode 116

Kaspersky podcast: The FTC is looking for consent with stalkerware apps

This week on the Kaspersky Transatlantic Cable podcast, Dave and I talk about a number of stories that tie back to the police theme.

To kick off episode 116, we take a look at a story within the automotive space. There, the author puts on his cybersleuthing hat to figure out that the license plates of cars used in photos would show up in Google search results.

The second story jumps more into the political arena and the conversation surrounding Facebook and a privately funded public police force. We stay on the topic of laws when we discuss the recent news of the FTC looking for consent with stalkerware apps. For our fourth story, we look at a windfall Aussie law enforcement received from a Bitcoin seizure a few years ago that paid out recently. To close out, we look at the latest on Samsung’s unlock issues.

If you like what you heard, please consider sharing with your friends or subscribing. For more details on the stories from this week, please click the links below.

  • Google and Facebook are reading your license plates
  • How Facebook bought a police force
  • FTC to developers: Get consent
  • FTC cracks down on stalkerware
  • Aussie feds seize arms dealer’s Bitcoin, profit 20-fold
  • Samsung flings out fix for Galaxy S10 fingerprint flaw

Enhance trust in cyberspace through the Paris Call

Anastasiya Kazakova, Public Affairs Manager

It is generally thought that international norms and rules can help reach the desired level of trust in cybersecurity among actors in international relations, bring cyber-stability, make the world less chaotic, and minimize the risk of conflict. While we support the further active development of such ‘cyber norms’, we understand that it requires much effort, time, and a strong will from states to create enforcement measures that ensure such norms are followed. At the same time, such norms can be followed by states only, while non-state actors remain in a legal ‘grey area’ without a direct obligation to meet certain norms of behavior.

While diplomatic efforts continue at international forums, we, as a representative of the private sector, would like to propose a practical solution: a solution to help achieve both the desired level of trust among actors and cyber-resilience from modern cyberthreats.

Generally, trust in a company is based largely on its reputation, on the long-term relationship built with its audiences. The decision to trust or not relies on the personal opinion of each individual, based for instance on past experience, culture and values. Trust in a company or particular product may therefore be based on a number of factors, where fear of potential risks might prevail over a more evidence-based approach.

In the strategic field of digital technology, should we not think about a new approach that is more evidence-based than impression-based? From this perspective, we propose shifting to a paradigm of ‘verifiable trust’. The main way to do this is through the development of a new framework and mindset – digital trust and digital ethics – which provide clear and practical verification measures to assess risk.

Defining together the conditions of trust under the Paris Call

We fully support President Emmanuel Macron’s Paris Call for Trust and Security in Cyberspace, since it represents an opportunity to bring together industry experts, academia, the public sector and civil society to work together to develop a shared comprehensive framework to assess the trustworthiness of IT products.

Such a framework would address IT supply chain risks to the benefit of all stakeholders (businesses, civil society, governments, and citizens) by helping assess what constitutes an appropriate level of risk within a risk-based approach.

Kaspersky is ready to provide its infrastructure and systems, including its source code, for the evaluations needed to make the framework work.

Two primary factors would be the focus:

  • Product integrity assessments: Do IT products contain any unintended functionalities?
  • Data collection and processing assessments: How do IT products collect, process, store, and protect user data?

In addition, we believe in the necessity of enhancing public-private cooperation under the Paris Call. The growing threat of fragmentation, division, and protectionism worries us, as it would undermine global stability and limit our capacity to address global challenges, including cybercrime. A lack of dialogue and cooperation would negatively affect the world community and make it less secure, while it is cybercriminals who benefit from such a divided world.

Hence, ‘multistakeholderism’ as a concept needs to be upheld to define best practices and maximize efforts. Companies and other non-state actors can be of tremendous help in shaping better governance in cybersecurity and ensuring the balanced development of data-driven economies.

In this regard, under the Paris Call we call on all signatories:

  • To establish a consultation platform through physical meetings to collect ideas and create streamlined collaboration between signatories. Such collaboration might focus on discussion of (i) our trustworthiness framework, (ii) cyber norms, and (iii) cyber hygiene and education.
  • To establish a consultation mechanism through physical meetings for developing the standardization approach and framework for cybersecurity products.
  • To prepare a high-level publication with a more detailed analysis of possible steps to promote the values and achieve the goals the Call states.

Kaspersky is ready to engage in this effort with other partners to build a safer cyber future based on digital trust – the combination of cybersecurity, effective data protection, accountability and transparency.

We are open to hearing feedback (please contact us at [email protected]) and proposals from other Paris Call signatories: industry players, academia, etc. to explore opportunities for further cooperation under the initiative. 

Kaspersky contributes to the NIS Summer School organized by ENISA and FORTH

Jochen Michels, Head of Public Affairs, Europe

Kaspersky experts to provide dedicated training sessions on incident response in ICS environments and in other real-world cases

Trust and security are at the core of the European Union Digital Single Market Strategy (presented in 2015), together with efforts aimed at enhancing cybersecurity, as stated in the 19th Progress Report towards an effective and genuine Security Union. The adoption of the Directive on security of network and information systems (NIS Directive) in July 2016 marked another important milestone toward a more secure digital environment in Europe.

This cybersecurity legislation aims to achieve common, state-of-the-art standards of network and information security across all EU Member States through increased EU-level cooperation and risk-management and incident-reporting obligations, bringing together the competent authorities and CSIRTs of all 28 Member States, the Commission, and the European Union Agency for Network and Information Security (ENISA).

As one of the early results, this harmonization work has led to a number of impressive collaborative public-private initiatives for improving cybersecurity capabilities at national level and increasing awareness, training and education related to NIS. A good example is the Network and Information Security (NIS) Summer School, which is jointly organized by ENISA and FORTH – the Foundation for Research and Technology – Hellas.

As a global company, Kaspersky is dedicated to promoting a culture of trust and security in Europe and worldwide. That is why the company is keen on dialogue and exchanges of views in the fields of politics, science and business. Besides, one of Kaspersky’s chief aims is to improve the ability of all sectors and actors to deal with cyberattacks. These are just some of the reasons it will be contributing to this year’s NIS Summer School, which is taking place in Crete, Greece, on September 16–20, 2019, with the overarching theme ‘Security Challenges of Emerging Technologies’.

Distinguished experts from around the world will meet in Crete to identify current trends, threats and opportunities against a backdrop of the recent advances in NIS measures and policies. Policy makers from EU Member States and EU Institutions, decision makers from industry, and members of the academic community will be in attendance at this high-level event. Kaspersky will deliver two training sessions on incident management in the afternoon of September 19.

Roland Sako of Kaspersky’s ICS CERT will explain how malware can affect ICS environments and how to respond in a crisis. In his talk he will share his experience of working on ICS incident-response and forensics cases. After a brief introduction to the methodology, he will explain how non-ICS-specific malware can have a notable impact on critical infrastructure. To illustrate this, he will give an example of how the ICS CERT dealt with an attack on a cement plant and how the team managed to figure out what had happened using nothing but a single PCAP file. He will also dive deep into the well-known WannaCry case.

Konstantin Sapronov, Head of the Global Emergency Response Team at Kaspersky, will go over a few real incident-response cases. He will demonstrate that today’s cyberattacks target all types of business around the globe. Each case will be presented in detail, covering, for example, initial points of attack, lateral-movement techniques, and the tools used for investigation. Attendees will also learn about the latest incident trends based on day-to-day experience.

Protecting public clouds from common vulnerabilities

Many businesses already utilize a cloud environment that consists of on-premises private cloud and public cloud resources — a hybrid cloud. However, when it comes to cybersecurity, companies tend to focus more on protection of physical or virtualized environments, paying much less attention to the part of their infrastructure that resides in public clouds. Some of them are sure that cloud providers should be responsible for the protection; others think that public clouds are secure by design and so do not require any additional protection. But both of those assumptions are erroneous: public clouds are just as prone to software vulnerability exploitation, update repository poisoning, network connection exploitation, and account information compromise as the rest of your infrastructure. And here is why.

Vulnerabilities of RDP and SSH

RDP is on by default on Amazon instances, and it does not support second-factor authentication by design. RDP has become the target of many different brute-force attack tools. Some of them concentrate on a few of the most common default usernames (like “Administrator”) and make thousands of guess attempts. Others try to guess the administrator’s unique login name by using the most common surnames along with common passwords. Brute-forcing algorithms can limit and randomize the number of attempts, with a timeout between sets of attempts, to avoid automated detection. Another method of attack is to brute-force the password for the SSM-User login that is often preconfigured on AWS instances.

Similar brute-force attempts target SSH services all the time, and though SSH does offer greater protection than RDP (e.g., second-factor authentication), a carelessly configured service can readily provide access to a persistent malicious actor. Brute-force attacks on SSH and RDP made up 12% of all attacks on Kaspersky’s IoT honeypots during the first half of 2019.
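To give a sense of how noisy these attacks are, here is a minimal detection sketch in Python that counts failed SSH logins per source address from a standard auth log. The log path, message format, and threshold are assumptions (they vary by distribution), and a production setup would rely on purpose-built tools such as fail2ban or a SIEM rather than an ad hoc script.

```python
# Minimal sketch: count failed SSH logins per source IP from a syslog-style
# auth log. The path, message format, and threshold are assumptions and
# vary by distribution; real deployments should use fail2ban or a SIEM.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"   # assumed Debian/Ubuntu location
THRESHOLD = 20                   # arbitrary cutoff for this example

failed_by_ip = Counter()
pattern = re.compile(r"Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})")

with open(LOG_PATH, errors="ignore") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failed_by_ip[match.group(1)] += 1

for ip, count in failed_by_ip.most_common():
    if count >= THRESHOLD:
        print(f"{ip}: {count} failed SSH logins - possible brute-force source")
```

Pairing detection like this with key-based authentication and disabled password logins removes most of this particular attack surface.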

Vulnerabilities in third-party software

Public clouds can and do expose you to vulnerabilities. Here are a few examples of how a vulnerability in third-party software offers an attacker the chance to execute code on the instance itself.

On June 3, 2019, a vulnerability was discovered in Exim, a popular e-mail server commonly deployed in public clouds. The vulnerability allowed remote code execution. If the server ran as root, as is most commonly the case, malicious code introduced onto the server would be executed with root privileges. Another Exim vulnerability, identified in July 2019, also allowed remote code execution as root.

Another example is the 2016 hack of the official Linux Mint website, which resulted in distros being altered to include malware incorporating an IRC backdoor with DDoS functionality. The malware could also be used to drop malicious payloads onto infected machines. Other reported cases involved malicious node.js modules, infected containers on Docker Hub, and more.

How to reduce risk

Cybercriminals can be very inventive when it comes to finding entry points into infrastructures, especially where there are many such infrastructures, all very similar and with similar issues, and all conveniently believed to be highly secure by design. To reduce and manage the risk much more effectively, protect operating systems on your cloud instances and virtual machines. Basic antivirus and antimalware protection are clearly not enough. Industry best practices dictate that every operating system in an infrastructure needs comprehensive, multilayered protection, and public cloud providers make similar recommendations.

That is where a security solution such as Kaspersky Hybrid Cloud Security comes in.  Our solution protects the various types of workloads running on different platforms, using multiple layers of security technologies including system hardening, exploit prevention, file-integrity monitoring, a network attack blocker, static and behavioral antimalware, and more. You can learn more about our solution here.

What happened to Kaspersky Free antivirus?

We’ve answered this one a bunch lately, so we decided to address it in a post. When a user tries to download Kaspersky Free antivirus, they find that they have downloaded Kaspersky Security Cloud Free instead. Here’s why.

Back in 2017, we introduced Kaspersky Free antivirus globally, a solution that offered basic protection for PC users at absolutely no cost, so that no person would be left unprotected from cyberthreats. Under its hood thrummed the same engine as in our premium security products, which collect the majority of awards from independent test labs each year. And it really was free — no payment required, no third-party ads. And, no surprise, it became quite popular.

But every product must evolve to address users’ needs, which are constantly changing, and our free solution is no exception. With this evolution, it went way beyond being just an antivirus utility — so we stopped calling it an antivirus. We think its new name suits it much better; it’s functionally much closer to our full-fledged flagship Kaspersky Security Cloud than to a basic security solution. Now, let us take a quick look at how exactly Kaspersky Security Cloud Free has evolved far beyond Kaspersky Free antivirus.

What is the difference between Kaspersky Free antivirus and Kaspersky Security Cloud Free?

First of all, unlike Kaspersky Free antivirus, the free version of Kaspersky Security Cloud exists not only for Windows, but for other platforms as well. It helps protect both Android and iOS mobile devices.

Second, whereas our free antivirus solution was limited to an antiphishing engine and basic protection from malware, Kaspersky Security Cloud Free is a significantly more advanced multiplatform solution with a diverse spectrum of features, capable of adapting the protection it offers to your lifestyle. To learn about Kaspersky Security Cloud Free in detail, you can read this post, and here we’ll just quickly go through the most important features.

Just like the paid version, Kaspersky Security Cloud Free is different from other security solutions because of its adaptivity scenarios. For example, it helps you check if a service you use has leaked your account data, and it provides helpful advice that is relevant to you, specifically, because it relates to services that you actually use.

It also helps you keep your passwords strong and secure with Kaspersky Password Manager and protects your traffic with a VPN solution. On Android, it helps you manage app permissions and delete the apps you don’t use. The paid version has even more adaptivity scenarios, but the general idea is the same: Kaspersky Security Cloud helps you with the security you need when you need it.

But what if I am already a Kaspersky Free user?

No worries, your Kaspersky Free antivirus will work just fine. You won’t need to change your security solution and start using Kaspersky Security Cloud Free — although we’d strongly recommend it. The license will be renewed automatically. You can continue as if nothing has changed.

Kaspersky Incident Communications

I remember that day like it was yesterday: Our CEO called me into his office, asking me to leave my smartphone and laptop at my desk.

“We’ve been hacked,” he said bluntly. “The investigation is still ongoing, but we can confirm that we have an active, extremely sophisticated, nation-state sponsored attacker inside our perimeter.”

To be honest, this wasn’t totally unexpected. Our specialists had been dealing with our clients’ security breaches for quite a while already, and as a security company, we were a particular target. Yet, it was an unpleasant surprise: Someone had penetrated an information security company’s cyberdefenses. You can read about it here. Today, I want to talk about one of the key questions that arose immediately: “How do we communicate about it?”

Five stages of learning to live with it: Denial, anger, bargaining, depression, and acceptance

As it happened, pre-GDPR, every organization actually had a choice — whether to communicate publicly or deny an incident had even occurred. The latter wasn’t an option for Kaspersky, a transparent cybersecurity company that promotes responsible disclosure. We had consensus throughout the C-suite and started preparing for the public announcement. Full steam ahead.

It was the right thing to do, too, particularly as we watched the widening geopolitical rift and saw clearly that the mighty powers behind the cyberattack would definitely use the breach against us — the only unknown elements were how and when. By proactively communicating the breach, we not only deprived them of this opportunity, but we also used the case in our favor.

They say there are two types of organizations — those that have been hacked and those that don’t even know they were hacked. In this realm, the paradigm is simple: A company shouldn’t hide a breach. The only shame is in keeping a breach from the public and thus threatening customers’ and partners’ cybersecurity.

Back to our case. Once we established the involved parties — legal and information security teams versus communications, sales, marketing, and technical support — we began the tedious work of preparing the official messaging and Q&A. We did that simultaneously with the ongoing investigation by Kaspersky’s GReAT (Global Research and Analysis Team) experts; involved team members conducted all communications over encrypted channels to exclude the possibility of compromising the investigation. Only when we had most of the A’s covered in the Q&A doc did we feel ready to come out.

As a result, various media outlets published almost 2,000 pieces based on a news break we initiated ourselves. Most (95%) were neutral, and we saw a remarkably small amount of negative coverage (less than 3%).  The balance of coverage is understandable; the media had learned the story from us, our partners, and other security researchers all working with the right information. I don’t have the exact stats, but from the way the media reacted to the story of a ransomware attack against Norwegian aluminum giant Hydro earlier this year, it seems the handling of those news stories was suboptimal. The moral of the story is, never keep skeletons in the closet.

Lesson learned — and passed on

The good news is that we’ve learned from the 2015 cyberattack not only about the technical capabilities of the most advanced cyberthreat actors, but also how to react to and communicate about the breach.

We had time to investigate the attack thoroughly and learn from it. We had time to pass through the anger and bargaining stages — I mean, to prepare the company for what we were going to say to the public. And the entire time, communication between the cybersecurity folks and corporate communication experts was ongoing.

Today, the time frame for getting ready for a public announcement has shortened dramatically: For example, GDPR requires that companies operating with customer data not only inform authorities about security breaches, but do so within 72 hours. And a company under cyberattack has to be prepared to go public from the very moment they inform the authorities about it.

“Whom should we communicate with inside the company? What channels can we use, and which should we avoid? How should we act?” These and many others are questions we’ve had to answer during the ongoing investigation. You may not have the luxury to work out these questions by yourself in the short time you have at your disposal. But this information and our valuable experience form the foundation of the Kaspersky Incident Communications Service.

Machine learning–aided scams

New technologies are clearly changing the world, but not the human psyche. As a result, evil geniuses are devising new technological innovations to target vulnerabilities in the human brain. One vivid example is the story of how scammers mimicked the voice of an international CEO to trick the head of a subsidiary into transferring money to shady accounts.

What happened?

The details of the attack are unknown, but the Wall Street Journal, citing insurance firm Euler Hermes Group SA, describes the incident as follows:
  1. Answering a phone call, the CEO of a U.K.-based energy firm thought he was speaking with his boss, the chief executive of the firm’s German parent company, who asked him to send €220,000 to a (fictitious, as it later turned out) Hungarian supplier within an hour.
  2. The British executive transferred the requested amount.
  3. The attackers called again to say the parent company had transferred money to reimburse the U.K. firm.
  4. They then made a third call later that day, again impersonating the CEO, and asked for a second payment.
  5. Because the transfer reimbursing the funds hadn’t yet arrived and the third call was from an Austrian phone number, not a German one, the executive became suspicious. He didn’t make the second payment.

How was it done?

Insurers are considering two possibilities. Either the attackers sifted through a vast number of recordings of the CEO and manually pieced together the voice messages, or (more likely) they unleashed a machine-learning algorithm on the recordings. The first method is very time-consuming and unreliable — it is extremely difficult to assemble a cohesive sentence from separate words without jarring the ear. And according to the British victim, the speech was absolutely normal, with a clearly recognizable timbre and a slight German accent. So, the main suspect is AI. But the attack’s success had less to do with the use of new technologies than with cognitive distortion, in this case submission to authority.

Psychological postmortem

Social psychologists have conducted many experiments showing that even intelligent, experienced people are prone to obeying authority unquestioningly, even if doing so runs counter to personal convictions, common sense, or security considerations. In his book The Lucifer Effect: Understanding How Good People Turn Evil, Philip Zimbardo describes this type of experiment, in which nurses got a phone call from a doctor asking them to inject a patient with a dose of medicine twice the maximum allowable amount. Out of 22 nurses, 21 filled the syringe as instructed. In fact, almost half of nurses surveyed had followed a doctor’s instructions that, in their opinions, could harm a patient. The obedient nurses believed they had less responsibility for the orders than a doctor with the legal authority to prescribe treatment to a patient. Psychologist Stanley Milgram likewise explained the unquestioning obedience to authority using the theory of subjectivity, the essence of which is that if people perceive themselves as tools for fulfilling the wills of others, they do not feel responsible for their actions.

What to do?

You simply cannot know with 100% certainty who you are talking to on the phone — especially if it’s a public figure and recordings of their voice (interviews, speeches) are publicly available. Today it’s rare, but as technology advances, such incidents will become more frequent. By unquestioningly following instructions, you might be doing the bidding of cybercriminals. It’s normal to obey the boss, of course, but it’s also critical to question strange or illogical managerial decisions. We can only advise discouraging employees from following instructions blindly. Try not to give orders without explaining the reason. That way, an employee is more likely to query an unusual order if there’s no apparent justification. From a technical point of view, we recommend:
  • Prescribing a clear procedure for transferring funds so that even high-ranking employees cannot move money outside of the company unsupervised. Transfers of large sums must be authorized by several managers.
  • Training employees in the basics of cybersecurity and teaching them to view incoming orders with a healthy dollop of skepticism. Our threat awareness programs will help with this.

Smominru botnet infects 4,700 new PCs daily

Active since 2017, Smominru has now become one of the most rapidly spreading malware families, according to a publicly available report. During August 2019 alone, it infected 90,000 machines worldwide, with an infection rate of up to 4,700 computers per day. China, Taiwan, Russia, Brazil, and the US have seen the most attacks, but that doesn’t mean other countries are outside its scope. For example, the largest network Smominru targeted was in Italy, with 65 hosts infected.

The Smominru botnet targets outdated Windows machines using the EternalBlue exploit

How the Smominru botnet propagates

The criminals involved are not too particular about their targets, which range from universities to healthcare providers. However, one detail is very consistent: About 85% of infections occur on Windows 7 and Windows Server 2008 systems. The rest include Windows Server 2012, Windows XP, and Windows Server 2003.

Approximately one-fourth of the affected machines were infected again after Smominru was removed from them. In other words, some victims did clean their systems but ignored the root cause.

That leads to the question: What is the root cause? Well, the botnet uses several methods to propagate, but primarily it infects a system in one of two ways: either by brute-forcing weak credentials for different Windows services, or more commonly by relying on the infamous EternalBlue exploit.

Microsoft patched the vulnerability that EternalBlue exploits, the same flaw that made the WannaCry and NotPetya outbreaks possible, back in 2017, even for discontinued systems. Nevertheless, many companies simply ignore updates.
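One quick triage step, sketched below in Python, is simply finding out which hosts still expose SMB (TCP port 445) to the network at all. The subnet is a placeholder, and an open port only indicates exposure, not a missing MS17-010 patch; verifying the patch itself requires proper vulnerability-scanning tooling.

```python
# Minimal sketch: find hosts in a subnet with TCP 445 (SMB) reachable.
# The subnet is a placeholder; an open port only shows exposure, not
# whether the MS17-010 patch is actually missing.
import socket
import ipaddress

SUBNET = "192.168.0.0/24"  # placeholder address range to check
TIMEOUT = 0.5              # seconds per connection attempt

def smb_reachable(host: str) -> bool:
    try:
        with socket.create_connection((host, 445), timeout=TIMEOUT):
            return True
    except OSError:
        return False

exposed = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts()
           if smb_reachable(str(ip))]

print("Hosts with SMB exposed:", ", ".join(exposed) or "none found")
```

Any host that shows up in that list and cannot be patched should at least have port 445 filtered from untrusted networks.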

The Smominru botnet in action

After compromising the system, Smominru creates a new user, called admin$, with admin privileges on the system and starts to download a whole bunch of malicious payloads. The most obvious objective is to silently use infected computers for mining cryptocurrency (namely, Monero) at the victim’s expense.
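As a hedged illustration, a defender could check a Windows host for that reported indicator by listing local accounts with the built-in net user command and flagging the admin$ name. The crude parsing below is an assumption made for brevity, and a match is only one indicator among many, not proof of infection.

```python
# Minimal sketch (Windows only): list local accounts via the built-in
# "net user" command and flag the "admin$" name that public reports
# associate with Smominru. A match is an indicator, not proof of infection.
import subprocess

SUSPICIOUS = {"admin$"}  # account name taken from the public report

output = subprocess.run(["net", "user"], capture_output=True, text=True).stdout

# Crude parsing: "net user" prints account names in whitespace-separated
# columns, so splitting on whitespace and lowercasing is enough to compare.
accounts = {token.lower() for token in output.split()}

found = SUSPICIOUS & accounts
if found:
    print("Suspicious local account(s):", ", ".join(sorted(found)))
else:
    print("No suspicious account names found.")
```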

However, that’s not it: The malware also downloads a set of modules used for spying, data exfiltration, and credential theft. On top of that, once Smominru gains a foothold, it tries to propagate further within the network to infect as many systems as possible.

An interesting detail: The botnet is fiercely competitive and kills any rivals it finds on the infected computer. In other words, it not only disables and blocks any other malicious activities running on the targeted device, but also prevents further infections by competitors.

Attack infrastructure

The botnet relies on more than 20 dedicated servers, mostly located in the US, though some are hosted in Malaysia and Bulgaria. Because Smominru’s attack infrastructure is so widely distributed, complex, and flexible, it is unlikely to be taken down easily, so the botnet will probably remain active for quite some time.

How to protect your network, computers, and data from Smominru:

  • Update operating systems and other software regularly.
  • Use strong passwords. A reliable password manager helps you create, manage, and automatically retrieve and enter passwords. That will protect you against brute-force attacks.
  • Use a reliable security solution.