Some time ago in this magazine, we discussed the possibility of A.I. stagnating in very specific areas of behavior and development models, areas that could take a long time to evolve and produce innovations with real impact on the field. This does not mean that currently available A.I. techniques cannot serve as tools to improve products and services (as in the field of cybersecurity). However, given the exponential growth in the use of A.I. in recent years, and the number of network-connected devices with access to our personal and financial information, we can expect the cybercrime industry to take advantage of these resources to carry out far more effective and massive attacks.

If we stop to think about it carefully, this is undoubtedly a nightmare scenario for computer security, because a malicious program with embedded A.I. can learn and adapt to many defensive and obfuscation techniques. Free artificial-intelligence building blocks for training such programs are available through Google's search engine and other online sources, and the underlying ideas work very well in practice; it is only a matter of time before we see them spread massively, endangering users who are almost always unaware of these issues.

Whoever leads A.I. will rule the world

Interpretations of the global expansion of A.I. range from the apocalyptic to the quite optimistic, depending on how we look at it. A.I. is opening the door to a set of new applications such as autonomous vehicles, facial and voice recognition (with a very controversial use in the generation of fake news), and image and data analysis, and today we can take for granted that A.I. plays a very important part in measuring the "power" of a nation and in tipping the scales in political or economic terms. But it is a fact that the more A.I. evolves and proliferates, the greater its attack surface becomes. Techniques such as advanced machine learning and neural networks allow malware to find and interpret patterns to its advantage, and, of course, from there to find and exploit an endless number of vulnerabilities in its targets.

Consider, for example, the concept of the "hivenet": infecting a number of devices that form a hive among themselves and, using the techniques mentioned above plus their combined computational power, find ways to attack their target faster and more massively than was previously possible. Given the number of devices we use daily, this concept represents a threat of enormous dimensions, mainly because of the versatility that artificial intelligence can give a hivenet: embedding a very powerful piece of malware that uses learning models, is polymorphic and multifaceted, and recovers from any suppression attempt in order to evade defenses and continuously improve its effectiveness. In short, it learns as it propagates and coordinates itself to execute global operations or the upgrades necessary for its evolution.
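To make the coordination idea concrete in a harmless way, here is a minimal gossip-style simulation in Python: independent nodes, with no central server, repeatedly average their local state with random peers and converge on a shared value. Everything in it (node count, rounds, the values exchanged) is an invented illustration of decentralized coordination, not malware.

```python
import random

# Toy gossip simulation: each "node" holds a local value (say, a learned
# parameter) and repeatedly averages it with a randomly chosen peer.
# Without any central server, all nodes converge to a shared estimate --
# the kind of leaderless coordination a hivenet would rely on.

random.seed(42)
NUM_NODES = 20
ROUNDS = 200

# Each node starts with its own local observation.
values = [random.uniform(0.0, 100.0) for _ in range(NUM_NODES)]

for _ in range(ROUNDS):
    # Two random nodes meet, exchange state, and keep the average.
    a, b = random.sample(range(NUM_NODES), 2)
    values[a] = values[b] = (values[a] + values[b]) / 2.0

spread = max(values) - min(values)
print(f"consensus value ~ {values[0]:.2f}, remaining spread: {spread:.4f}")
```

The point of the sketch is that no single node is essential: remove any one of them and the rest still converge, which is exactly what makes suppression attempts against such a swarm so difficult.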

This sounds entirely like science fiction, yet it is already a reality. And, as I have hinted, the capabilities of A.I. systems can also lead us to scenarios of identity theft and deepfakes that are far more complex and difficult to unravel. I kindly ask the reader to watch the well-known fake video of Barack Obama, in which "the person" of the former president was impersonated in an extraordinary way to deliver a completely credible false message, with identical voice and articulation. These are bots that impersonate people entirely, capable of imitating anyone's voice and learning to speak the same way, even using the same verbal expressions. If a hacker set this kind of bot calling, uninterrupted, through a database of people, it could harvest passwords, bank account access codes, or card numbers from thousands of victims in a very short time.

Another case is Grover, the best fake-news generator of the moment. The system has learned, through online training, to create false news from nothing more than a headline. From that starting point it can generate what look like articles from outlets such as The Washington Post and The New York Times, which can then be shared on social networks. Grover adapts its style to each outlet so that it is not obvious the article was never published on that outlet's website, for example by copying the way a given author opens a real article. Using this tool, researchers created fake Washington Post articles claiming that the U.S. Congress had voted to remove Donald Trump from the presidency. With the style of the article, the supposed "information," and the use of quotations, it really does read like a genuine piece. And although fake news is a very real problem, it is also a very "new" one, which is why current efforts to detect fake news automatically are rather dangerous: they merely shift our trust from one machine to another, without forcing us to assume the cost of seeking out truthful and reliable information.
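The underlying mechanism, conditioning a language model on a headline and letting it write the body, can be sketched with off-the-shelf tools. The minimal sketch below assumes the public GPT-2 model via the Hugging Face transformers library (not Grover's own model); the headline and sampling parameters are invented for illustration, and the output quality will be far below Grover's.

```python
# Minimal sketch of headline-conditioned text generation, the general
# idea behind systems like Grover. Uses the public GPT-2 model through
# the Hugging Face `transformers` library; Grover's actual model and
# training setup are not reproduced here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

headline = "Congress votes to remove the president"  # hypothetical prompt

result = generator(
    f"{headline}\n\n",   # condition the model on the headline
    max_new_tokens=60,   # keep the generated "article body" short
    do_sample=True,      # sample rather than greedy-decode
    temperature=0.9,
)

print(result[0]["generated_text"])
```

What Grover adds on top of this basic recipe is conditioning on further metadata, such as the outlet, the date, and the author, which is what lets it mimic a specific publication's house style.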

Relying on automated systems to censor text generated by A.I. would mean ending up censoring comments made by humans as well, since most of the communicative acts we produce on the Internet today are mere "babbling": banal threads (see any Twitter or Facebook thread) of repeated ideas without any understanding or depth. Exactly what A.I. can already generate.

A current case of A.I.-powered malware is undoubtedly DeepLocker, a proof-of-concept threat concealed inside videoconferencing software that remains dormant until it reaches a specific victim, identified through factors that include facial recognition, geolocation, voice recognition, and potentially the analysis of data gathered from sources such as online trackers and social networks. DeepLocker's deep neural network (DNN) model stipulates the activation conditions for executing the payload. If those conditions are not met and the target is not found, the malware stays locked; since the trigger conditions are defined entirely inside the A.I. model itself, they are extraordinarily difficult to inspect or reverse-engineer.
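As publicly described by IBM Research, DeepLocker hides its trigger by deriving the payload's decryption key from the DNN's output, so the payload cannot even be decrypted unless the model recognizes the intended target. The toy Python sketch below illustrates only that key-derivation idea, with a harmless string standing in for the payload; the classifier labels and helper names are invented for illustration.

```python
# Benign toy illustration of DeepLocker's concealment idea: the key that
# unlocks the payload is derived from the classifier's output, so the
# payload decrypts only when the model actually sees the intended target.
# The "classifier" labels and the payload (a harmless string) are invented.
import base64
import hashlib
from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

def key_from_label(label: str) -> bytes:
    # Deterministically derive a Fernet key from a model output label.
    digest = hashlib.sha256(label.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)

# Build time: encrypt the payload under the key for the target label.
# Only the ciphertext is shipped -- the target label never appears.
target_label = "person_42"  # hypothetical face-recognition output
ciphertext = Fernet(key_from_label(target_label)).encrypt(b"hello, world")

# Run time: attempt to unlock with whatever the classifier reports.
def try_unlock(observed_label: str) -> None:
    try:
        plaintext = Fernet(key_from_label(observed_label)).decrypt(ciphertext)
        print("unlocked:", plaintext.decode())
    except InvalidToken:
        print(f"'{observed_label}': conditions not met, payload stays locked")

try_unlock("person_07")  # wrong target -> decryption fails
try_unlock("person_42")  # intended target -> payload unlocks
```

Because the unlock condition is a key derivation rather than an if-statement, a defender analyzing the binary finds only ciphertext and a neural network, with no readable trigger to look for.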

To conclude this brief text, whose aim was to bring into the reader's orbit a subject that is, without doubt, so important for the future of humanity, we can point out some of the things A.I. could accomplish if used maliciously, capable of causing great havoc worldwide; several have already been mentioned in the text:

  • The ability to generate images can lead to impersonation or to the publication of false content designed to create panic, chaos, and social confusion, depending on how deeply it is deployed.
  • Programs that look for vulnerabilities in devices and networks can be used to attack and exploit those vulnerabilities, generating automated penetrations on a large scale.
  • Autonomous drones being developed for delivery, such as Amazon's, could be hacked to transport bombs or weapons just as easily.
  • The automation of tasks prevents psychological factors such as empathy from coming into play when decisions are made. At the same time, anonymity is reinforced, which can be a double-edged sword in cyberspace.
  • A.I. systems are efficient and scalable, and the more they are used, the more they learn. This implies that once A.I. frameworks and systems proliferate, cybercriminals will find a gold mine to develop, given the almost unlimited versatility this kind of tool offers; the investment is more than profitable.
  • On top of the above, few (almost no) antimalware engines can do anything against A.I.-powered malware; they are completely outmatched, and that goes even more for firewalls and IDS/IPS.
  • In the end, if research in this field continues and companies keep investing to increase their profits with fewer personnel, then within some indeterminate period of time Artificial Intelligence will end up surpassing human beings in almost every professional field.