Preventing the Use of Deepfakes in Cyberattacks

Cybercrime has risen sharply this year. From July 2020 to June 2021, we observed almost 11 times more ransomware attacks, and that number continues to grow. But the next challenge is not just an increase in volume: we have also seen more attacks on high-profile targets, along with the rise of new methods.

Deepfakes and deep attacks
Deepfakes first became prominent around 2017, mainly for entertainment purposes. Two examples are social media memes that insert Nicolas Cage into movies he never appeared in, and the recent Anthony Bourdain documentary, which used deepfake technology to imitate the voice of the late celebrity chef. There are also beneficial use cases for deepfake technology in medicine.

Unfortunately, the maturation of deepfake technology has not gone unnoticed by bad actors. In cybersecurity, deepfakes are drawing growing attention because they use artificial intelligence to mimic human activity and can be used to enhance social engineering attacks.

GPT-3 (Generative Pre-trained Transformer 3) is an AI system that uses deep learning to produce natural-sounding, highly persuasive text. With it, an attacker who has harvested email threads, whether by compromising a mail server or by running a man-in-the-middle attack, can generate emails and replies that mimic the impersonated person's writing style, word choice, and tone. That person may be a manager or an executive, and the messages may even reference previous communications.

The tip of the iceberg
Creating an email is just the beginning. Software tools that can clone someone's voice already exist on the Internet, and more are under development. It takes only a few seconds of audio to build a fingerprint of someone's voice, after which the software can generate speech in that voice in real time.

Although still in the early stages of development, deepfake video will become a problem as central processing unit (CPU) and graphics processing unit (GPU) performance grows more powerful and cheaper. The commoditization of advanced applications will also lower the bar for creating these forgeries. These advances may eventually enable real-time impersonation over voice and video calls capable of passing biometric analysis. The implications are far-reaching, including potentially eliminating voiceprints as a viable form of authentication.

One hopeful sign is Counterfit, a newly released open-source tool that lets organizations run penetration tests against AI systems, including facial recognition, image recognition, and fraud detection, to verify that the algorithms they rely on are trustworthy. They can also use it for red/blue team exercises. Of course, attackers can be expected to do the same, using such tools to identify vulnerabilities in AI systems.

Take action on deepfakes
As these proof-of-concept techniques go mainstream, security leaders will need to change how they detect and mitigate attacks. That will certainly include fighting fire with fire: if attackers use artificial intelligence as part of their offense, defenders must use it as well. One example is using AI techniques that can detect subtle voice and video anomalies.
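As a toy illustration of the kind of signal analysis involved (a minimal sketch, not a production detector: the spectral-flatness heuristic, frame size, and threshold here are all illustrative assumptions), the snippet below flags audio frames whose spectral statistics deviate sharply from the rest of a recording:

```python
import numpy as np

def spectral_flatness(frame):
    """Ratio of the geometric to the arithmetic mean of the power
    spectrum: pure tones score near 0, broadband noise near 1."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def flag_anomalous_frames(signal, frame_len=512, z_thresh=3.0):
    """Return indices of frames whose flatness is a statistical outlier."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    scores = np.array([spectral_flatness(f) for f in frames])
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return np.where(np.abs(z) > z_thresh)[0].tolist()

# Demo: broadband noise with one unnaturally tonal frame spliced in.
rng = np.random.default_rng(0)
frame_len = 512
sig = rng.standard_normal(frame_len * 40)
t = np.arange(frame_len)
sig[20 * frame_len:21 * frame_len] = np.sin(2 * np.pi * 8 * t / frame_len)
flags = flag_anomalous_frames(sig, frame_len)
```

Real detectors train deep models on large corpora of genuine and synthetic media; the point here is only that synthetic content often leaves statistical traces a machine can spot faster than a human.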

Our current best defense is zero-trust access, which restricts users and devices to a predefined set of assets, combined with segmentation and integrated security policies designed to detect and limit the impact of attacks.
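The core idea can be sketched as a default-deny policy check (a minimal illustration; the user, device, and asset names are hypothetical, and real zero-trust deployments add continuous verification rather than a static table):

```python
# Hypothetical default-deny policy: a (user, device) pair may touch
# only the assets explicitly granted to it; everything else is refused.
POLICY = {
    ("alice", "laptop-042"): {"crm", "mail"},
    ("bob", "desktop-007"): {"build-server"},
}

def is_allowed(user: str, device: str, asset: str) -> bool:
    """Allow only predefined (user, device, asset) grants; deny by default."""
    return asset in POLICY.get((user, device), set())
```

Unknown combinations fall through to the empty set, so even a legitimate user on an unmanaged device is denied, which is precisely what limits the blast radius of a successful impersonation.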

Beyond email-borne requests, end-user training needs to cover how to spot suspicious or unexpected requests that arrive via voice or video. For spoofed communications carrying embedded malware, companies need to monitor traffic to detect the payload. That means having devices fast enough to inspect streaming video without degrading the user experience.
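Payload inspection on streaming traffic has a subtlety worth showing: a signature can straddle chunk boundaries. The sketch below is a simplified illustration (real inspection engines match thousands of signatures at line rate, often in hardware; `stream_scan` and the signature bytes are hypothetical) that keeps a small overlap between chunks so such matches are not missed:

```python
def stream_scan(chunks, signature: bytes):
    """Find absolute offsets of `signature` in a chunked byte stream,
    retaining len(signature) - 1 trailing bytes between chunks so a
    match spanning a chunk boundary is still detected."""
    keep = max(len(signature) - 1, 0)
    tail = b""
    base = 0  # absolute offset of tail[0] within the stream
    hits = []
    for chunk in chunks:
        buf = tail + chunk
        start = 0
        while (i := buf.find(signature, start)) != -1:
            hits.append(base + i)
            start = i + 1
        tail = buf[-keep:] if keep else b""
        base += len(buf) - len(tail)
    return hits
```

Because the retained tail is one byte shorter than the signature, a full match can never live entirely inside the overlap, so no hit is reported twice.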

Fight Deepfakes now
Almost every technology becomes a double-edged sword, and AI-driven deepfake software is no exception. Malicious actors already use artificial intelligence in many ways, and that will only expand. In 2022, expect them to use deepfakes to mimic human activity and launch enhanced social engineering attacks. By implementing the recommendations above, organizations can take proactive steps to stay secure even in the face of these sophisticated attacks.
