Generative AI used to increase social engineering attacks by 135%

According to the latest research report released by the cybersecurity company Darktrace, attackers are using generative AI such as ChatGPT to craft messages with more descriptive text, richer punctuation, and longer sentences, and social engineering attacks have increased by 135% as a result.

Report: ChatGPT and other generative AI led to a 135% increase in phishing email attacks

In the digital age, cybercriminals are finding new ways to exploit people’s information online. Social engineering attacks, which manipulate individuals into divulging sensitive information or performing specific actions, have risen significantly in recent years.
According to a report by the cybersecurity company Darktrace, attackers have found a new tool to make their activities more effective: generative AI. The report shows that attackers use AI tools such as ChatGPT to produce messages with more text, varied punctuation, and longer sentences, and links these changes to an increase of up to 135% in social engineering attacks.

What is generative AI?

Generative AI is a type of AI, typically built on deep learning, that can create new, original content. It uses neural networks to learn the patterns and structure of a dataset, then uses that knowledge to generate new content that resembles the original.
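To make the idea concrete, here is a minimal sketch (an illustration only, not something from the Darktrace report) that uses the open-source Hugging Face transformers library with the small GPT-2 model to continue a prompt with statistically plausible text; the model and prompt are assumptions chosen for the example.

# Minimal text-generation sketch using an off-the-shelf language model.
# The model ("gpt2") and the prompt are illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Machine learning is"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# Prints the prompt continued with model-generated text.
print(outputs[0]["generated_text"])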
Generative AI is becoming more sophisticated and is being used in a wide range of applications, including:
– Language translation
– Content generation
– Image and video creation
– Music composition
– Dialogue creation

How generative AI is used in social engineering attacks

Social engineering attacks often rely on the manipulation of language to deceive the victim. Attackers use generative AI to make their messages more convincing and harder to detect.
By varying punctuation, sentence length, and descriptive detail, the messages generated by AI tools look more like legitimate communications. This allows them to bypass spam filters and phishing detectors, making social engineering attacks more successful.
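As a toy illustration of why these surface features matter (a hypothetical heuristic invented for this article, not Darktrace's detection logic), the sketch below scores a message on word count, punctuation variety, and average sentence length: a terse, link-style message scores low, while a longer, well-punctuated message of the kind a language model can easily produce scores much higher and would slip past such a naive check.

# Toy heuristic filter: purely illustrative, not any vendor's real logic.
# It scores a message on the surface features an attacker can now tune
# with a language model: length, punctuation variety, sentence length.
import re

def surface_score(message: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", message) if s.strip()]
    words = message.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    punctuation_variety = len({c for c in message if c in ",.;:!?'\""})
    # Higher score = reads more like ordinary business writing.
    return 0.1 * min(len(words), 100) + 0.5 * punctuation_variety + 0.2 * avg_sentence_len

terse = "Click link now to verify account"
polished = ("Hello, and thank you for your patience. Could you please review the "
            "attached statement at your earliest convenience? If anything looks "
            "unfamiliar, let us know; we are happy to help.")

print(surface_score(terse), surface_score(polished))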
The Darktrace report highlights ChatGPT, an AI tool that can generate convincing messages. ChatGPT is a conversational service built on a large language model trained on vast amounts of text, which lets it produce high-quality content. Models of this kind power chatbots and conversational agents, but they are now also being used by cybercriminals to craft convincing social engineering lures.

The impact of generative AI on cybersecurity

Generative AI poses a significant threat to cybersecurity. It allows attackers to create unique and convincing attacks that are difficult to detect. As AI models become more sophisticated, they will be able to create even more convincing attacks.
The use of generative AI in social engineering attacks is increasing, but solutions are being developed to combat this threat. Machine learning algorithms are being used to detect and prevent this type of attack.
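As a rough sketch of that kind of defense (an illustrative example built with the open-source scikit-learn library and a handful of placeholder messages, not any specific vendor's system), the snippet below trains a simple TF-IDF plus logistic regression classifier and then scores a new message.

# Minimal ML-based phishing text detector: a sketch, assuming a labelled
# dataset of messages; the in-line samples here are placeholders only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your invoice for last month is attached, let me know if you have questions.",
    "Urgent: your account will be suspended, verify your password here immediately.",
    "Meeting moved to 3pm tomorrow, same room.",
    "You have won a prize, click the link to claim it now.",
]
labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = social engineering / phishing

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Predicts 1 (suspicious) or 0 (legitimate) for unseen text.
print(model.predict(["Please confirm your credentials to avoid account closure."]))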
There are also steps individuals can take to protect themselves from social engineering attacks, including:
– Being cautious of unsolicited communications
– Not clicking on links in unsolicited emails or messages
– Being aware of common social engineering tactics, such as urgency or the promise of reward

Conclusion

Generative AI is a powerful tool that can be used for good or evil. Unfortunately, cybercriminals are finding ways to use this technology for malicious purposes. Social engineering attacks are becoming more sophisticated, and individuals and organizations must take steps to protect themselves.
Developing effective solutions to combat social engineering attacks is a critical priority for network security companies. The use of generative AI in these attacks underscores the need for continued innovation in the field of cybersecurity.

FAQs

1. What is social engineering?
– Social engineering is the use of deception to manipulate individuals into divulging sensitive information or performing specific actions.
2. How does generative AI make social engineering attacks more effective?
– Attackers use generative AI to produce messages with more descriptive text, natural punctuation, and varied sentence length, making them appear more authentic and convincing.
3. How can individuals protect themselves from social engineering attacks?
– Being cautious of unsolicited communications, not clicking on links in unsolicited messages, and being aware of common social engineering tactics are some ways individuals can protect themselves from these attacks.
