Experts wonder what will happen when we succeed in building computers smarter than ourselves: how it will work, and why it must be done right to ensure that the human race does not disappear. Are we the smartest possible biological species capable of launching a technological civilization?

The idea of some form of artificial intelligence has been around since the 1950s, when the first digital computers were completed, and today we seem to be approaching a world where computers are as smart as humans, or even smarter. AI refers to the simulation of human intelligence by computer systems: a computer that can "think" for itself. In practice, much of it comes down to feeding the computer enough data. AI is a wide-ranging tool that allows us to integrate information more effectively, analyze data, and use the resulting insights to improve decision-making.

Tremendous advances in artificial intelligence are happening every day: robots, autonomous cars, chatbots, spam filters, intelligent assistants (Alexa, Siri), and much more are becoming commonplace. These technologies are driving a new wave of economic progress, addressing many of the world's most pressing problems and some of the most profound challenges in human history. Artificial intelligence is transforming information technology, telecommunications, transportation, healthcare, education, defense, criminal justice, banking, and agriculture: every sector of our lives. The aim of AI is to build intelligent computers capable of performing tasks that would normally require human intelligence. These machines are programmed to think and act like humans. Speech recognition, decision-making, and visual perception are some of the capabilities an AI would need in order to learn, reason, and perceive like a human being without human assistance.

Will the internet start to think? Will putting artificial intelligence into humanoid robots enable the machines to rebel against us? If superintelligence is to benefit us, the path to it matters. Superintelligence should be the result of global cooperation, not some secret government program; the only way it can really work is for humanity, working together, to develop it gradually (though we know that superior secret government technology, like the atomic bomb, has existed before). Digital technology is a kind of dominant good: it allows those who own and control it to exert power outside traditional channels of political or legal influence. In every era, people have fought over "dominant goods", the ideas and artifacts that allow one group to dominate others. Those who own and control the most powerful digital technologies will increasingly write the rules of society itself. Facebook inciting genocide in Myanmar, YouTube continuing to host disinformation about the pandemic, and labor abuses by companies like Uber eroding the foundations of institutional trust are some examples. We can only hope for technology governance guided by principles rather than by crisis management. No technology is beyond the reach of human politics: there are always ways in which people can intervene, and ways in which technology limits them.

No technology or innovation is capable of saving humanity from itself.

We should care about who has a say in the future. The whole world is not Washington, DC, or Silicon Valley, so how do we design for everyone? How does machine learning change how we work and what kind of work we do? We should be concerned about the impact of AI on job losses and job creation, and about what benefits or income redistribution should follow when AI systems replace jobs.

There are many jobs that will be lost to automation. We should be concerned about how predictive policing software disproportionately incriminates Black people over white people, and about why certain insurance providers charge people of color more for coverage (predatory algorithms favor predominantly white neighborhoods over others). We should be concerned about flawed datasets and the negligence behind them, about the obvious flaws that already exist in facial recognition software marketed for analyzing feelings and emotions, and about how widely these systems are being deployed.

We should be concerned that among the biggest obstacles to self-driving cars are security flaws that leave them vulnerable to hackers. All of us, creators, consumers, and technologists, should care about how fallible machine learning is and how readily users trust it, believing its results without question. Among the biggest worries are the "most likely to re-offend" scores produced by predictive policing algorithms and used to justify harsher sentences. Judges have treated such scoring as supportive "algorithmic evidence" from a new class of technological tools meant to aid the justice system.

We also need to worry about how systems are designed to handle errors. What happens when a user gets stuck in a series of dead-end automated systems with wrong data, or a chatbot loops in place? A system designed to perform specific tasks, no matter how well trained, will be inflexible with edge cases. We should worry about computer vision failing to recognize Black skin, or cameras telling Asian users they "blinked" because the datasets were trained on predominantly white eyes; this problem caused a passport website to reject photos of Asian men applying for passport renewal, and in such situations users cannot intervene to solve the problem.
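The "blink" failure above comes from training-data imbalance. A minimal sketch of that mechanism, using entirely hypothetical numbers and a toy threshold classifier (not any real detector's method): a model fit almost entirely on one group learns a decision boundary that systematically misfires on an underrepresented group.

```python
# Toy illustration with made-up data: a blink detector trained on a
# skewed dataset learns a threshold that wrongly flags an
# underrepresented group as "blinking".
import statistics

def train_threshold(open_eyes, closed_eyes):
    # Learn the midpoint between the mean "eye openness" score of
    # open-eye and closed-eye training images (a stand-in for what a
    # real detector would fit).
    return (statistics.mean(open_eyes) + statistics.mean(closed_eyes)) / 2

def predicts_blink(threshold, openness):
    # Below the learned threshold, the model claims the eyes are closed.
    return openness < threshold

# Hypothetical training set: 95% group A (openness ~0.8 when open),
# only 5% group B (openness ~0.4 when open, due to a different eye
# shape as captured by this feature).
open_train = [0.8] * 95 + [0.4] * 5
closed_train = [0.1] * 100

t = train_threshold(open_train, closed_train)  # midpoint = 0.44

# Group A users with open eyes pass...
print(predicts_blink(t, 0.8))  # False: no blink flagged
# ...but group B users with open eyes are wrongly told they blinked.
print(predicts_blink(t, 0.4))  # True: spurious "you blinked" error
```

The model is "correct" on the data it saw; the harm is baked in by what the dataset left out, which is why users on the receiving end cannot fix it themselves.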

Viral memes created with these tools are another source of worry about artificial intelligence. Experts have expressed concern that such sophisticated tools could be weaponized to spread fake news and scams or to harass people online (for example, by inserting someone's face into pornographic videos, or fabricating a confused or drunken face). This technology can easily trick people into believing fabricated images. Machines (robots) will never "think" like humans. There is a line between building algorithms that analyze patterns the way the human brain does and claiming that machines think the way humans do. We should care about how these systems harm the people who use the products every day.

Experts predict that the rise of artificial intelligence will make most people better off over the next decade, but there are also concerns about how these advances will affect what it means to be human, to be productive, and to exercise free will. Networked artificial intelligence is expected to increase human efficiency, but also to threaten human autonomy and agency. AI is likely to bring us both great opportunities and difficult challenges. Technological advances also raise concerns about possible negative impacts on jobs, personal privacy, society, the economy, and politics. Some experts estimate that automation and robotics will take over about half of our jobs within the next 15 years.

By now, artificial intelligence has made its way into our daily lives through smartphones and services like Google. AI mainly teaches computers to imitate human thinking, which requires real-time access to large amounts of information, and that is a problem. An alternative would be to get computers to simulate the human brain rather than merely imitate it. Scanning real brains and translating most of their properties into digital signals, combined with the much higher speed of digital computation, would mean a faster-running consciousness, leading to superintelligence. Developments such as artificial synapses, which perform better when cooled, are making neural networks work more like brains.

Superintelligence means an intellect far smarter than the best human brain in virtually every area, including science, creativity, general wisdom, and social skills. Humanity's collective intelligence could be improved by increasing the efficiency of our education, improving our communication and the availability of knowledge, and other factors that would make humanity as a whole smarter. It is sobering to note that some of the goals that could lead to the destruction of humanity could be programmed by us.

If machine brains surpass human brains in general intelligence, then a new superintelligence could replace humans as the dominant life form on Earth.

The goal of AI should be to create beneficial intelligence. AI systems need to be safe and secure, and legal systems need to be updated to be fairer and more efficient, to keep pace with AI, and to manage the risks it brings. Artificial intelligence systems should be designed and implemented so that they are compatible with the ideals of human dignity, rights, freedoms, and cultural diversity.

Advanced artificial intelligence systems may represent a profound change in the history of life on Earth. AI systems designed to self-improve in ways that could lead to rapid increases in quality or quantity must be subject to strict safety and control measures. Superintelligence should be developed only in the service of widely shared ethical ideals and for the benefit of all humanity, not of a single country or organization. Experts who worry about the long-term impact of artificial intelligence and new tools on the essential elements of being human propose solutions:

  • Global good is number 1 - Digital collaboration in the best interest of humanity is the top priority. People around the world must come to a common understanding and agree to join forces to maintain control over complex human-digital networks.

  • Prioritize people - Reorganize economic and political systems to better help humans "race with the robots" rather than be made irrelevant by programmed intelligence.

  • A value-based system - Develop policies to ensure that AI is focused on 'humanity and the common good'.

  • Locking in Dependency - Many see AI as augmenting human capacity, but some predict that humans' deepening dependence on networked machines will erode their cognitive, social and survival skills, as well as their abilities to think for themselves, take action independently of automated systems and communicate effectively with others.

  • Job loss - AI taking over jobs will widen economic divides and fuel social upheaval.

  • Misuse of data - Most AI tools are and will be in the hands of profit-seeking companies or power-seeking governments. Data use and surveillance in complex systems are designed for profit or for the exercise of power, and these globally networked systems are not easy to regulate or contain.

  • Loss of human agency - Individuals experience a loss of control over their lives.

Decision-making about key aspects of digital life is automatically ceded to code-driven, black-box tools. People sacrifice independence, privacy, and power over choice; they have no control over these processes. Many experts share these deep concerns and suggest paths toward solutions.

Isaac Asimov, a biochemistry professor and one of the fathers of science fiction, held that robots should not harm humans and must obey human commands. The ethics and development of artificial intelligence could pose a potential danger to humanity (an interesting example is the recent chess match in which a robot broke the finger of the boy it was playing against).

The most dangerous threat to humanity is investment in the production of killer machines: military robots and killer drones. If robotization is the future of the world, then wars between autonomous killer robots are inevitable. An arms race in lethal autonomous weapons should be avoided by banning their production, though that is almost impossible to achieve.

Powerful AI systems could consolidate control over the world, and whoever controls such systems could use them to create a perfect surveillance state (as is already happening). By reading all emails and texts, listening to all phone conversations, watching all video and traffic cameras, analyzing all credit card transactions, and studying all online behavior, an AI system would have extraordinary insight into what people on Earth are thinking and doing. By analyzing mobile phone data, such a system would know where most of us are at all times. With superhuman technology, the step from the perfect surveillance state to the perfect police state is very small.

Once such a totalitarian state is formed, it will be virtually impossible for people to overthrow it. Will artificial intelligence improve our lives like never before, or will it give us more power than we can control? How can we prevent superintelligence from turning against us?

Technological progress can bring many valuable products and services for free and without government intervention. For example, anyone with Internet access gets access to many things for free, including free video conferencing, photo sharing, social media, online courses, and countless other services.

We are the guardians of the future of life, as we shape the age of artificial intelligence. Creators of superintelligence will need to be equipped to think critically about the technologies shaping the new world. There is absolutely no guarantee that we will succeed in building human-level general artificial intelligence in our lifetime.