Understanding the Risks Associated with AI

AI can benefit us in many ways, from simplifying tasks to personalizing experiences. However, the technology also comes with inevitable trade-offs. AI is believed to threaten cybersecurity, displace certain jobs, enable new kinds of weapons, and create legal complexities.

Here we will go through the key concerns associated with artificial intelligence.

A Threat to Traditional Jobs:

Are jobs under threat from AI? Machines have a long history of displacing jobs. AI can automate predictable and repetitive tasks. For example, the technology can replace human involvement in mundane work such as form filling, calculations, personalization, and the scripted instructions used in customer support. Drones and robots are gradually taking over courier deliveries.

Robots may even be deployed in war soon, which would minimize human casualties. Robo-surgeons can conduct various medical operations, from vision correction to knee replacement surgery. Robot security guards equipped with directional microphones and infrared sensors are being installed in buildings, minimizing the need for extra security staff. Robo-callers can analyze sales calls faster than their human counterparts. Finance jobs might soon be taken over by AIs that can compute and assess data faster. However, it can't be denied that human involvement cannot be wholly removed from such operations; for example, someone still has to operate and monitor the work being done by robots. While AI can automate some tasks, it is likely to create new jobs as well.

Security Concerns:

As more systems become AI-enabled, considering how adversaries might exploit them becomes more crucial. For example, attackers can interfere with inputs or training data, or potentially glean insights about the training data by examining the model's output on specially crafted test inputs. AI systems can be confused by adversaries because they can be tricked into misclassifying data. By exploiting the way an AI system processes data, a threat actor can lure it into "seeing" something that isn't there. For example, a system can be fooled into thinking that an image of an apple is instead an image of an orange. It is, in effect, an optical illusion for a machine learning system. Model inversion is another popular type of attack, in which cybercriminals reverse-engineer an AI model to reconstruct the data that was used to train it. If you find your business security is at risk, you can get in touch with an IT consultant in Singapore.
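To make the misclassification trick concrete, here is a minimal sketch in Python. The linear "classifier", its weights, and the apple/orange labels are all made up for illustration, but the perturbation step mirrors the idea behind the fast gradient sign method used in real evasion attacks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" linear classifier: sigmoid(w.x + b) > 0.5 means
# "apple", otherwise "orange". Weights and input are random stand-ins.
w = rng.normal(size=16)
b = 0.1
x = rng.normal(size=16)

def apple_score(features):
    """Probability-like score that the input is an apple."""
    return 1.0 / (1.0 + np.exp(-(w @ features + b)))

# Evasion attack: for a linear model the gradient of the score with respect
# to the input is just w, so nudging every feature slightly against that
# gradient flips the label while leaving the input almost unchanged.
epsilon = 0.3
direction = np.sign(w) if apple_score(x) > 0.5 else -np.sign(w)
x_adv = x - epsilon * direction

print("original score:     %.3f" % apple_score(x))
print("perturbed score:    %.3f" % apple_score(x_adv))
print("max feature change: %.3f" % np.abs(x_adv - x).max())
```

The point is that the perturbed input looks almost identical to the original, yet the model's answer flips; real attacks do the same thing against image classifiers with far more features.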

An Era of AI Weapons:

AI can be used to launch missiles without human intervention, and an enemy could manipulate data to alter those missiles' routes. AI weapons can select and hit targets without any human role. Here is an excerpt from the open letter presented at the International Joint Conference on Artificial Intelligence (IJCAI) in 2015: “Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

Transparency is Not Ensured:

Many AI systems are built on neural networks, which are complex systems of interconnected nodes. But these systems are poor at showing the "intention" behind their decisions. They show you nothing beyond the input and the output, which makes them hard to audit. This matters because it is often essential to see which specific data led to a particular decision. What logic produced this output? What information was used to train the model? How does the model behave in edge cases? None of this is transparent yet.
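As a rough illustration of the problem, consider this toy network in Python (the weights are random and entirely hypothetical). The only thing it hands you is a number, and the best you can do from the outside is probe it, for instance by occluding one input at a time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network with random, hypothetical weights. From the outside
# you only ever see the input vector and the final number it returns.
W1 = rng.normal(size=(8, 4))  # input (8 features) -> hidden (4 units)
W2 = rng.normal(size=4)       # hidden -> single output

def network(x):
    hidden = np.maximum(0.0, x @ W1)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # sigmoid output

x = rng.normal(size=8)
base = network(x)
print("prediction: %.3f (but the weights alone don't say why)" % base)

# A crude occlusion probe: zero out one feature at a time and watch how the
# output moves. This is an after-the-fact guess at feature importance, not a
# real explanation of the model's internal reasoning.
for i in range(8):
    occluded = x.copy()
    occluded[i] = 0.0
    print("feature %d zeroed -> shift %+.3f" % (i, network(occluded) - base))
```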

Unclear Liability for Actions:

A human can be prosecuted. But that's not the case with a machine. What are the terms of liability when an AI system makes an error or causes an accident? Operating companies are likely to blame the machine's functionality to save their own skin. It is not yet clear whether a company can be held responsible for behaviour its device learned on its own.

For detailed information about AI, please refer to our main blog.

So these are some of the risks likely to be posed by AI. However, whether AI does good or evil is determined by how it is used and with what intentions.

What do you think? Please let us know by commenting below.

About Ng Wei Khang

Ng Wei Khang is the CEO of APIXEL IT Support, a Singapore-based IT consultancy that has been operating for over 8 years. Apixel provides fixed-price IT support services with unlimited packages, including small business server setup, cloud solution configuration, network management, and data security & theft prevention. The company also provides expert IT consultancy to SMEs in Singapore.

