
Examples of Artificial Intelligence Risks

Updated: Feb 20


Can unpredictable advances in AI technology threaten humanity?


Artificial intelligence today is known as narrow AI (or weak AI) because it is designed to perform a narrow task (e.g. facial recognition, internet searches, or driving a car). However, the long-term goal of many researchers is to create general AI (AGI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.


Approaches to AI technology

Here are the four different approaches that have historically defined the field of AI:

  • Thinking humanly

  • Thinking rationally

  • Acting humanly

  • Acting rationally

Businesses that aim to develop in-house AI solutions are often unable to do so because they lack adequate AI skills. Individuals who are highly skilled in AI are hard to come by, and this poses a problem for the industry as more and more AI companies and job opportunities crop up.


Finding talented individuals is therefore getting harder, given the field's rapid rate of change and low barriers to entry, and one way to address this challenge is for organizations to establish relationships with universities. This gives students room to engage in learning projects and receive mentoring, and lets organizations identify emerging experts early, before they enter the workforce.


To have a successful AI-driven strategy, an organization must have a strong IT infrastructure. Because AI systems process large volumes of data, they need high-performing hardware to do the task successfully. Keeping computer systems running smoothly and functioning properly is therefore essential, and it is a particular challenge for smaller organizations with modest IT budgets.


Organizations must also feed their systems high-quality data to utilize AI effectively, because poor-quality data leads to substandard results from AI software. As the Big Data era unfolds, organizations are collecting ever more data.


However, much of the data collected is irrelevant or of the wrong kind, which hinders the success of an AI strategy. To ensure top-notch AI results, stakeholders must thoroughly clean their existing data sets and collect only high-quality data going forward.
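
As a rough illustration of what such cleaning involves, here is a minimal sketch in Python using pandas. The file and column names are invented for the example; a real pipeline would be tailored to the organization's own data.

```python
import pandas as pd

# Hypothetical customer dataset; file and column names are invented for this example.
df = pd.read_csv("customers.csv")

# Drop exact duplicate records, which inflate counts and bias models.
df = df.drop_duplicates()

# Remove rows missing the fields the model actually needs.
df = df.dropna(subset=["age", "annual_spend"])

# Coerce numeric fields, turning stray text like "N/A" into NaN,
# then discard implausible values.
df["age"] = pd.to_numeric(df["age"], errors="coerce")
df = df[df["age"].between(0, 120)]

# Normalize free-text categories so "USA", "usa " and "U.S.A."
# don't split one group into three.
df["country"] = df["country"].str.strip().str.upper().str.replace(".", "", regex=False)

df.to_csv("customers_clean.csv", index=False)
```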


AI's emergence across industries is quite encouraging. The problem that accompanies it, though, is how insecure user data has become, along with the risk of that data falling into the wrong hands.


Breaching data-protection laws carries cumbersome penalties, and as a result, well-informed managers have moved to a consent-based approach to doing business. This helps them maintain long-term relationships with their clients, which they understand must not be jeopardized.

Organizations must know what data to collect and understand why. This saves them a great deal of trouble and, most importantly, protects them from ever being penalized.


Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts consider two scenarios most likely:



Scenario 1. AI technology is programmed to do something devastating:

Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties.

To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI but grows as levels of AI intelligence and autonomy increase.


Scenario 2. AI technology is programmed to do something beneficial, but it develops a destructive method for achieving its goal:

This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
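
The airport example can be made concrete with a toy sketch. In the Python below (all plans and numbers are invented), an optimizer given only the literal objective "as fast as possible" picks a plan the passenger never wanted, while an objective that encodes the passenger's implicit constraints picks the intended one.

```python
# Toy illustration of goal misspecification; all plans and numbers are invented.
plans = [
    {"name": "reckless", "minutes": 12, "laws_broken": 9, "comfort": 0.1},
    {"name": "normal",   "minutes": 25, "laws_broken": 0, "comfort": 0.9},
    {"name": "scenic",   "minutes": 40, "laws_broken": 0, "comfort": 1.0},
]

def literal_objective(plan):
    # "As fast as possible" -- exactly what was asked for, nothing more.
    return -plan["minutes"]

def intended_objective(plan):
    # What the passenger actually wanted: fast, but legal and tolerable.
    if plan["laws_broken"] > 0 or plan["comfort"] < 0.5:
        return float("-inf")  # unacceptable plans are ruled out entirely
    return -plan["minutes"]

print(max(plans, key=literal_objective)["name"])   # -> "reckless"
print(max(plans, key=intended_objective)["name"])  # -> "normal"
```

The hard part in practice is that the constraints in `intended_objective` are exactly the things we tend to leave unstated, which is why fully aligning an AI's goals with ours is so difficult.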


As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.


Here is a list of dangerous implications that AI may bring into our lives:

Remote-controlled car crashes

The biggest concern involves AI being used to carry out physical attacks on humans, such as hacking into self-driving cars to cause major collisions.


Sophisticated phishing

In the future, attempts to extract sensitive and personal information from individuals could be carried out almost entirely by AI. If most of the research and message generation typical of a phishing scam could be handled by AI, far more people would be duped by this activity.

AI could impersonate people’s real contacts, using a writing style that mimics the style of those contacts, making it harder to spot the scam.
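
To see why style mimicry undermines a natural defense, consider the toy stylometry check below (a sketch only, not a real detector; the messages and the trigram approach are illustrative). It scores how closely a new message's character-trigram profile matches a contact's known writing. Once AI can write in anyone's style, a high similarity score stops being evidence that the message is genuine.

```python
from collections import Counter
import math

def trigrams(text):
    # Count overlapping character trigrams as a crude style fingerprint.
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a, b):
    # Standard cosine similarity between two trigram count vectors.
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known_style = trigrams("Hey, running late again -- grab coffee without me, sorry!")
new_message = trigrams("Hey, quick favour -- can you wire $500 before noon? Sorry!")

# A high score only means the style matches; with AI-written text, style
# similarity is no longer evidence the real contact wrote it.
print(round(cosine_similarity(known_style, new_message), 2))
```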


Manipulating public opinion

Well-trained bots could create a strategic advantage for political parties and almost work as artificially intelligent propaganda machines that thrive in low-trust societies, the report claims.

This goes further than just the spread of fake text content. "AI systems can now produce synthetic images that are nearly indistinguishable from photographs, whereas only a few years ago, the images they produced were crude and obviously unrealistic."


Autonomous weapons

AI programmed to do something dangerous, as is the case with autonomous weapons programmed to kill, is one way AI can pose risks.


Invasion of privacy and social grading

It is now possible to track and analyze an individual's every move online as well as when they are going about their daily business.


Misalignment between our goals and the machine’s

Part of what humans value in AI-powered machines is their efficiency and effectiveness. But if we aren’t clear about the goals we set for AI machines, a machine that isn’t equipped with the same goals we have could be dangerous.


Social manipulation

Social media, through its AI-powered algorithms, is very effective at targeted marketing. These platforms know who we are and what we like, and are incredibly good at surmising what we think.


Discrimination

Since machines can collect, track, and analyze so much about you, it’s very possible for those machines to use that information against you.

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.


In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind.

By inventing revolutionary new technologies, such a super-intelligence might help us eradicate war, disease, and poverty, so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes super-intelligent.


There are some who question whether strong AI will ever be achieved, and others who insist that the creation of super-intelligent AI is guaranteed to be beneficial. At FLI, we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe that research today will help us better prepare for and prevent such potentially negative consequences in the future, letting us enjoy the benefits of AI while avoiding its pitfalls.


The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime.


While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.


Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us.


The best example of what we could face may be our own evolution. People now control the planet not because we’re the strongest, fastest, or biggest, but because we’re the smartest. If we’re no longer the smartest, can we be sure we will remain in control?





