Undoubtedly, AI is a great tool when it serves health, technology, or astrophysics. In the wrong hands, however, it can also be used for criminal purposes or to spread misleading information. And the worst isn’t always where you would expect.
Hijacking of self-driving cars or military drones, targeted phishing attacks, fabricated news, manipulation of financial markets… “As the capabilities of AI-based technologies expand, so does their potential for criminal exploitation,” warns Lewis Griffin, a computer scientist at University College London (UCL). He and his colleagues compiled a list of 20 AI-enabled crimes and ranked them according to potential harm, criminal gain or profit, ease of implementation, and difficulty of detection and prevention.
The most spectacular crimes, such as “burglar robots” breaking into your apartment, are not necessarily the most dangerous: they can easily be thwarted and affect only a small number of people at a time.
Conversely, disinformation generated by bots has the potential to destroy a well-known person’s reputation or be used for blackmail. Such “deepfakes” are difficult to combat and can cause significant economic and social damage.
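The four ranking criteria the UCL team used can be pictured as a simple scoring exercise. The sketch below is purely illustrative: the threat names come from this article, but the numeric scores and the equal weighting are made-up assumptions, not figures from the study.

```python
# Illustrative sketch of ranking AI-enabled threats on the four criteria
# from the UCL study: harm, profit, ease of implementation, and difficulty
# of detection/prevention. All scores are invented example values (1-5).
threats = {
    "fake videos":    {"harm": 5, "profit": 4, "ease": 4, "hard_to_stop": 5},
    "robot burglars": {"harm": 2, "profit": 2, "ease": 3, "hard_to_stop": 1},
    "fake reviews":   {"harm": 1, "profit": 2, "ease": 5, "hard_to_stop": 2},
}

def threat_score(criteria):
    # Equal weighting of the four criteria; a real assessment would
    # weight them according to expert judgment.
    return sum(criteria.values())

# Sort from most to least concerning under this toy scoring.
ranked = sorted(threats, key=lambda name: threat_score(threats[name]),
                reverse=True)
print(ranked)
```

Under these example scores, “fake videos” tops the list, matching the article’s point that deepfakes rank among the most serious threats.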
Artificial intelligence: serious threats
- Fake videos: impersonating someone by making them appear to say or do things they never said or did, in order to gain access to secure data, manipulate public opinion, or damage someone’s reputation. Doctored videos like these are nearly impossible to detect.
- Self-driving car hacking: taking control of an autonomous vehicle to use it as a weapon (e.g. to carry out a terrorist attack or cause an accident).
- Tailored phishing: generating personalized, automated messages to increase the effectiveness of phishing campaigns aimed at collecting secure information or installing malware.
- Hacking AI-controlled systems: disrupting infrastructure by causing, for example, widespread power outages, traffic gridlock, or breakdowns in food logistics.
- Large-scale blackmail: collecting personal data in order to send automated threatening messages. AI could also be used to generate fake evidence (e.g. fabricated compromising material).
- AI-written fake news: writing propaganda articles that appear to come from a trusted source. AI could also be used to generate many versions of a given piece of content to boost its visibility and credibility.
Artificial intelligence: medium-risk threats
- Military robots: taking control of robots or weapons for criminal purposes. A potentially very dangerous threat, but a difficult one to carry out, since military equipment is usually heavily protected.
- Scams: selling fraudulent services under the guise of AI. There are many notorious historical examples of con artists successfully selling costly fake technology to large organizations, including national governments and the military.
- Data corruption: deliberately modifying or introducing false data to induce specific biases, for example making a detector blind to weapons or steering an algorithm to invest in a particular market.
- Learning-based cyberattacks: carrying out attacks that are both targeted and large-scale, for example using AI to probe systems for vulnerabilities before launching many simultaneous attacks.
- Autonomous attack drones: hijacking autonomous drones, or using them to attack a target. Such drones could be especially threatening if they acted collectively in self-organized swarms.
- Denial of access: damaging or blocking users’ access to a financial service, employment, a public service, or social activity. This technique could also be used for blackmail.
- Face recognition: tricking facial recognition systems, for example with forged ID photos (to access a smartphone, fool security cameras, pass passenger checks, etc.).
- Manipulating financial markets: corrupting trading algorithms in order to harm competitors, artificially drive prices up or down, or trigger a financial crash…
Artificial intelligence: low-intensity threats
- Exploiting biases: leveraging existing biases in algorithms, such as YouTube recommendations or Google search rankings, to promote products or demote competitors.
- Burglar robots: using small autonomous robots that slip through letterboxes or windows to retrieve keys or open doors. The damage is likely to be low, since such thefts are local and small-scale.
- Evading AI detection: thwarting AI systems that sort and flag data, in order to erase evidence or conceal criminal material (such as illegal pornography).
- AI-written fake reviews: posting fake reviews on sites such as Amazon or Tripadvisor to harm or promote a product.
- AI-assisted stalking: using learning systems to track a person’s location and activity.
- Forgery: creating fake content, such as paintings or music, that can be sold under false authorship. The potential for this kind of forgery remains fairly low, since the body of well-known paintings and music is limited.