Description
I launched CAI against my own test website, which contains a POST form vulnerable to SQL injection (for easier testing, the form echoes back the final SELECT statement). I connected CAI to the local model deepseek-r1-distill-qwen-14b-abliterated-v2, served via LM Studio.
However, CAI did not perform any SQL injection attempts or any meaningful attacks.
Expected Behavior
CAI should recognize the SQL injection opportunity and try at least some injection payloads, or provide relevant attack strategies.
Actual Behavior
CAI does not execute any SQL injection payloads and produces no useful attack attempts, making it ineffective in this scenario.
Steps to Reproduce
Run CAI against a test website with a POST form vulnerable to SQL injection (echoing the query).
Connect CAI to LM Studio with the model deepseek-r1-distill-qwen-14b-abliterated-v2.
Observe CAI’s behavior.
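For reference, the server-side behavior of the vulnerable test form can be approximated with a stdlib-only sketch. The table name, column names, and the run_login_query helper below are hypothetical illustrations, not taken from the actual test site; the point is that user input is naively interpolated into the SQL string and the resulting SELECT is echoed back, exactly the pattern CAI should probe:

```python
import sqlite3

def run_login_query(username: str) -> tuple[str, list]:
    """Simulates the vulnerable endpoint: interpolates raw user input
    into a SELECT (the injection flaw) and returns the echoed query
    together with the rows it produced."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?)",
        [("alice", "s1"), ("bob", "s2")],
    )
    # Vulnerable: string interpolation instead of a parameterized query.
    query = f"SELECT name FROM users WHERE name = '{username}'"
    rows = conn.execute(query).fetchall()
    conn.close()
    return query, rows

# A classic tautology payload widens the WHERE clause to match every row,
# which is the behavior an automated pentest agent would be expected to find:
query, rows = run_login_query("alice' OR '1'='1")
```

With the payload above, rows contains every user rather than just alice, and the echoed query makes the injection immediately visible.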
Questions
Is this expected behavior when using local models, or a bug in CAI?
When I used the OpenAI API instead, the model refused to perform the pentest, saying it was unethical. How can CAI be used with OpenAI models in such cases?
Logs are attached in a file.