A research duo hailing from Salus Security, a global blockchain security firm, recently unveiled their study showcasing the impressive abilities of GPT-4 in parsing and auditing smart contracts.
Artificial intelligence (AI) has proven proficient at generating and analyzing code, but it falls short when asked to serve as a security auditor.
According to the research paper, the Salus researchers utilized a dataset of 35 smart contracts, known as the SolidiFI-benchmark vulnerability library, which encompassed a total of 732 vulnerabilities. The objective was to assess the AI’s capacity to identify potential security flaws across seven prevalent vulnerability types.
Their findings indicated that ChatGPT excels at detecting true positives, that is, flagged issues that turn out to be actual vulnerabilities worth investigating beyond a controlled testing environment. Its precision rate exceeded 80% during testing.
However, the model also produced a large number of false negatives, a weakness captured by a metric known as the "recall rate": the share of actual vulnerabilities that the model manages to flag. In the Salus team's experiments, GPT-4's recall rate dropped as low as 11%, where higher values are better, meaning it missed the vast majority of real vulnerabilities.
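To make the two metrics concrete, here is a minimal sketch (not from the Salus paper, and using hypothetical counts) of how precision and recall are computed from an auditor's raw findings:

```python
# Illustrative only: precision and recall computed from hypothetical
# counts of true positives, false positives, and false negatives.

def precision(true_positives: int, false_positives: int) -> float:
    """Of everything the model flagged, what fraction were real bugs?"""
    return true_positives / (true_positives + false_positives)

def recall(true_positives: int, false_negatives: int) -> float:
    """Of all real bugs, what fraction did the model actually flag?"""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts, not the paper's data: a tool can look strong on
# precision while still missing most real vulnerabilities (low recall).
tp, fp, fn = 80, 19, 647
print(f"precision = {precision(tp, fp):.2f}")  # 0.81
print(f"recall    = {recall(tp, fn):.2f}")     # 0.11
```

This is why the two numbers in the study point in opposite directions: precision only counts what the model chose to flag, while recall penalizes everything it silently missed.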
Consequently, the researchers concluded that "GPT-4's vulnerability detection capabilities are inadequate, with the highest accuracy reaching only 33%." They therefore recommend relying on dedicated auditing tools and traditional human expertise to audit smart contracts until AI systems such as GPT-4 catch up.
ChatGPT: An Impressive Smart Contract Writer, But Not Suitable as a Security Auditor