How do voice cloning apps work

New phone scams 3.0 using Artificial Intelligence

Artificial Intelligence voice cloning is only one of several tools that AI developers have been working on in 2023. The Chinese company ByteDance launched StreamVoice, a tool capable of changing the user's voice live and with great quality. ByteDance is also behind TikTok, so the tool is a natural fit for the short-video social network. But Artificial Intelligence also brings danger and new phone scams into our world.

The danger with StreamVoice and similar tools is that they can recreate almost any voice digitally, so hackers may try to trick you by posing as a friend or relative on the phone. It won't happen immediately, but having the software so easily at hand is worrisome.

New phone scam using Artificial Intelligence to imitate human voices

Artificial Intelligence phone scams will increase in the coming years

According to researchers from Innotec Security, Part of Accenture, the use of Artificial Intelligence for phone scams will increase noticeably in the coming years. The main reason is the improvement in voice imitation in AI apps: a small fragment of a recorded conversation is enough for hackers to generate full conversations with AI tools.

Imagine that a hacker records your voice and uses it with your friends or relatives. Identifying a false voice won't be easy, and frauds and scams will increase significantly. Preventing these new phone scams powered by Artificial Intelligence will require improvement on several fronts: for example, new laws and regulations covering everyday AI tools, and new tools and technology for the security agencies that deal with cybercrime.

How to detect a phone scam using Artificial Intelligence voice tools?

AI voice imitation tools may create a climate of suspicion: you will have to ask personal questions of everyone who calls you just to confirm they are who they say they are. But technology companies such as Apple or Google may work to improve security around AI tools. One option is the development of deepfake recognition algorithms. It's a good opportunity to protect not only users but also their products and services. Governments and telecommunications companies can also adopt similar tools to prevent misinformation as well as fraud.
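
As a rough illustration of what a deepfake recognition algorithm could look like, here is a minimal sketch of a detector trained on recordings that have already been labelled as real or cloned. The MFCC features, the logistic-regression classifier and the file names are illustrative assumptions, not any company's actual method.

    # Minimal sketch of an audio deepfake detector, assuming you already have
    # labelled recordings of real and cloned voices. Features, classifier and
    # file names are illustrative assumptions only.
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression

    def clip_features(path: str) -> np.ndarray:
        """Summarise an audio clip as the mean of its MFCC coefficients."""
        audio, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    def train_detector(real_paths: list[str], fake_paths: list[str]) -> LogisticRegression:
        """Fit a simple real-vs-cloned classifier on the feature summaries."""
        X = np.array([clip_features(p) for p in real_paths + fake_paths])
        y = np.array([0] * len(real_paths) + [1] * len(fake_paths))  # 1 = cloned voice
        return LogisticRegression(max_iter=1000).fit(X, y)

    # Hypothetical usage:
    # model = train_detector(["real_1.wav", "real_2.wav"], ["cloned_1.wav", "cloned_2.wav"])
    # score = model.predict_proba(clip_features("incoming_call.wav").reshape(1, -1))[0, 1]
    # print(f"Probability the call uses a cloned voice: {score:.2f}")

A real detector would need far more data and stronger features, but the idea is the same: learn the statistical fingerprints that separate synthetic speech from a genuine human voice.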

When the phone rings

If hackers can imitate the voice of friends and relatives, the most effective security barrier is you. Only through your own ears will you be able to determine whether the call is real or not. Some advice in this regard includes:

  • Be suspicious if the call comes from an unknown number.
  • Pay attention to the voice, listening for robotic or lifeless expressions.
  • If the voice never pauses, that is also a deepfake sign.
  • Pauses of 2 or 3 seconds when you ask a question can also indicate an AI tool imitating a voice (see the sketch after this list).
  • The voice tries to hurry you into making a financial transaction.
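
As a rough illustration of the pause heuristic above, here is a minimal sketch that flags a call when the caller's average response delay looks more like a generation lag than natural human timing. The 2-second threshold and the example delays are assumptions, not a verified rule.

    # Rough sketch of the "long pause" heuristic from the list above: if the
    # caller consistently takes about 2 seconds or more to answer your
    # questions, treat the call with caution. Threshold and delays are assumed.
    from statistics import mean

    def suspicious_latency(response_delays_sec: list[float], threshold_sec: float = 2.0) -> bool:
        """Return True when the average gap between a question and the reply
        looks more like a generation delay than natural human timing."""
        if not response_delays_sec:
            return False
        return mean(response_delays_sec) >= threshold_sec

    # Example: delays (in seconds) noted for four questions during a call.
    print(suspicious_latency([2.4, 2.1, 2.8, 1.9]))  # True -> be extra careful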

Common sense is your main ally when detecting a false voice call made with Artificial Intelligence. This kind of fraud will increase in the near future, so it's very important for users to be cautious and use common sense when talking. Ask personal questions and confirm that the voice belongs to the person before giving out any personal data or transferring money.
