A new HackerOne review revealed new insights into the growing cybersecurity issues AI poses. The statement drew insight from 500 safety authorities, a community survey of 2, 000 people, comments from 50 customers, and anonymized program data.
Their main issues with AI were:
- Leaked training data ( 35 % ).
- Unauthorized usage ( 33 % ).
- outsiders ‘ hacking of AI models ( 32 % )
Additionally, according to the survey, 48 % of employees believe that AI poses the greatest security risk to their business. These worries emphasize the need for businesses to review their AI safety practices before flaws become real dangers.
How did protection study evolve with the development of AI.
According to the HackerOne statement, AI has a potential threat, and security has been working to combat it. Among those surveyed, 10 % of safety experts specialize in AI. In reality, 45 % of security officials consider AI among their businesses ‘ greatest challenges. Data dignity, in certain, was a problem.
” AI is actually hacking another AI types”, said Jasmin Landry, a safety scholar, and HackerOne pentester, also known as @jr0ch17, in the document.
51 % of respondents claim that as businesses move to incorporate generative AI, simple security procedures are being ignored. Only 38 % of HackerOne users were confident in their ability to defend against AI threats.
Logical errors and LLM fast injection are the most frequently reported AI flaws.
Over the past year, HackerOne’s security system has seen a 171 % increase in the number of AI resources enrolled in its programs.
What are the most frequently identified flaws in AI goods:
- General AI safety ( such as preventing AI from generating harmful content ) ( 55 % ).
- Business logic errors ( 30 % ).
- LLM prompt injection ( 11 % ).
- Data poisoning from LLM training ( 3 % ).
- ( 3 % ) disclosure of LLM sensitive information.
HackerOne emphasized the value of individual factors in preventing AI from entering systems and keeping these resources comfortable.
” Even the most sophisticated automation ca n’t match the ingenuity of human intelligence”, said Chris Evans, HackerOne CISO and chief hacking officer, in a press release. The 2024 Hacker-Powered Security Report demonstrates how crucial mortal skill is to addressing the special challenges that AI and another cutting-edge systems pose.
Notice: For the second quarter in a column, executives are more worried about AI-assisted assaults than any other danger, Gartner reported.
Outdoor AI, cross-site programming problems occur the most
Some things have n’t changed: Cross-site scripting ( XSS) and misconfigurations are the weaknesses most reported by the HackerOne community. The best methods for identifying issues are insertion testing and bug bounty testing, according to the respondents.
Security clubs have a tendency to use AI to produce false positives.
According to further study from a September HackerOne-sponsored SANS Institute report, 58 % of security experts think security team and threat stars could work with generative AI strategies and methods in an “arms race.”
In the SANS survey, security professionals ( 71 % ) reported using AI to automate time-consuming tasks successfully. The exact individuals acknowledged, however, that threat actors may use AI to improve their operations. In particular, respondents “were most concerned with AI-powered phishing campaigns (79 % ) and automated vulnerability exploitation ( 74 % )”.
Notice: Security leaders are getting discouraged with AI-generated script.
According to Matt Bromiley, an researcher at the SANS Institute,” Security team must find the best uses for AI to keep up with enemies while also taking into account its current limitations,” or they run the risk of producing more work for themselves.
The option? Applications of AI may be reviewed externally. The most effective method to identify AI safety and security issues is “external review,” according to more than two-thirds of those surveyed ( 68 % ).
” Groups are now more practical about AI’s present restrictions” than they were last month, said HackerOne Senior Solutions Architect Dane Sherrets in an email to TechRepublic. Humans provide both defensive and offensive protection with a lot of crucial framework that AI has yet to learn. Teams have also become less hesitant to use the systems in crucial systems due to issues like illusions. However, AI is still great for increasing productivity and performing tasks that do n’t require deep context”.
More studies from the SANS 2024 AI Survey, released this month, include:
- In the future, 38 % of their safety strategies intend to use AI.
- 38.6 % of respondents said they had issues with using AI to identify and prevent cyber challenges.
- 40 % of people object to implementation of AI because of its legal and social implications.
- 41.8 % of companies have faced pushback from people who do not believe AI judgments, which SANS says is “due to absence of transparency”.
- Businesses now employ It in their protection strategy in 43 % of cases.
- Anomaly detection systems ( 56 % ), malware detection ( 50 % ), and automated incident response ( 48 % ), are the most frequently used in security operations using AI technology ( 56 % ).
- De attributes this to a lack of coaching information, which 58 % of respondents said AI systems struggle to identify new risks or respond to outcast indicators.
- 71 % of those who complained about how AI was unable to find or deal with cyber threats said AI had a tendency to produce false positives.
HackerOne’s recommendations for improving AI safety
HackerOne recommends:
- Often testing, confirmation, identification, and assessment throughout an AI woman’s life cycle — from training to deployment and use.
- Establishing an Artificial governance framework and determining whether federal or industry-specific AI conformity requirements are pertinent to your business.
HackerOne also reaffirmed the importance of organizations offering training on social and security issues as well as open communication about relational AI.
In September, HackerOne released some study information, as well as the full report. Both are taken into account in this updated post.