Using Fraim to Find and Fix Vulnerabilities
Walk through with me as I find and fix a vulnerability in an npm package
Last week while perusing HackerNews I came across this blog post comparing a bunch of AI SAST tools on the market today. In a section at the end, the author explores a 0-day exploit in the image-size npm package (with 14.5 million weekly downloads!) that all of the tools failed to detect. Naturally, I wanted to see how well Fraim’s “code” workflow would do!
You can see the results from the run here, and if you’d like to run it for yourself, you can check out our docs.
Analysis
Let’s go over the results one by one: we’ll discuss what Fraim gets right, as well as the cases it misses.
DoS in icns.ts
Let’s start with the best finding, an Infinite Loop that has the potential for a DoS attack. Here is the code:
while (imageOffset < fileLength && imageOffset < inputLength) {
  const imageHeader = readImageHeader(input, imageOffset)
  const imageSize = getImageSize(imageHeader[0])
  images.push(imageSize)
  imageOffset += imageHeader[1]
}
The infinite loop happens because if the value of imageHeader[1] is 0, imageOffset never advances, so the loop never terminates. A simple oversight to make, but very dangerous. If an upstream user of this library was trying to get the image size of a file that was uploaded by a user, their service would be prone to a DoS attack.
I honestly wasn’t sure how easy this would be to reproduce (i.e., how do I forge the header so that imageHeader[1] is 0?). So, I conceded to ask our AI overlords. To make it easy, I just copy/pasted the “Explanation” from Fraim and told the LLM to write a test that reproduced this vulnerability. It produced a valid test, and I was able to validate the vulnerability without having to figure out how to write the byte array myself.
Now the easy part: fixing it. I again prompted an LLM to write a fix for me, but it gave me something rather complicated. I ended up with a one-liner instead:
imageOffset += imageHeader[1] > 0 ? imageHeader[1] : 8
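To see why the guard matters, here is a minimal sketch of the loop with the fix applied. Note that `entryLength` and `countEntries` are simplified stand-ins for illustration, not the library’s actual `readImageHeader`/`getImageSize` helpers:

```typescript
// Hypothetical stand-in for reading an entry's length field from the header.
function entryLength(input: Uint8Array, offset: number): number {
  return input[offset] ?? 0
}

// Simplified version of the parsing loop with the one-liner fix applied.
function countEntries(input: Uint8Array): number {
  let imageOffset = 0
  let count = 0
  while (imageOffset < input.length) {
    const step = entryLength(input, imageOffset)
    count++
    // The fix: never allow a zero step, which would spin the loop forever.
    imageOffset += step > 0 ? step : 8
  }
  return count
}

// A forged buffer whose length field is 0 previously looped forever;
// with the guard, the loop terminates.
const forged = new Uint8Array([0, 0, 0, 0])
console.log(countEntries(forged)) // → 1
```

Without the ternary, the `forged` input pins the loop at offset 0 indefinitely; with it, the offset always makes forward progress.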
This finding validated pretty much every one of my convictions about using AI in a security context. Within 5 minutes, I went from not knowing anything about this library, to finding a vulnerability (that I did not really understand at first glance), to validating said vulnerability (which helped me understand it better), and then writing a fix.
Insecure Design in ico.ts
Fraim found two separate issues in ico.ts that it classified as warnings. Their descriptions were:
- “Out-of-bounds read in ICO parser due to missing bounds checks on directory offsets.”
- “Unbounded loop controlled by untrusted input (image count) enables uncontrolled resource consumption.”
Let’s look at the corresponding code to see if we can validate these two findings.
The first finding comes from the following function:
function getSizeFromOffset(input: Uint8Array, offset: number): number {
  const value = input[offset]
  return value === 0 ? 256 : value
}
Since offset is not validated here (or anywhere), we have a bug where the offset can exceed the bounds of the provided input, causing an out-of-bounds read. Luckily, in JavaScript this just returns undefined instead of crashing the application. So in this case a warning is fair: it’s not a security vulnerability in and of itself, but it’s still a bug that should be fixed.
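You can see the behavior in a quick sketch. The checked variant below is a hypothetical fix, not the library’s code; note that the original’s `number` return type hides the fact that the value can really be undefined:

```typescript
// The function as written: typed-array reads past the end yield undefined
// rather than throwing, despite the declared `number` return type.
function getSizeFromOffset(input: Uint8Array, offset: number): number {
  const value = input[offset]
  return value === 0 ? 256 : value
}

const tiny = new Uint8Array([64])
console.log(getSizeFromOffset(tiny, 10)) // undefined, silently

// Hypothetical bounds-checked variant that fails loudly instead.
function getSizeFromOffsetChecked(input: Uint8Array, offset: number): number {
  if (offset >= input.length) throw new TypeError("offset out of bounds")
  const value = input[offset]
  return value === 0 ? 256 : value
}
```

Failing loudly (or returning a documented sentinel) is preferable here, since a silent undefined just moves the confusion upstream.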
Now for the second finding, here is the relevant snippet:
  calculate(input) {
    const nbImages = readUInt16LE(input, 4)
    const imageSize = getImageSize(input, 0)
    if (nbImages === 1) return imageSize
    const images: ISize[] = []
    for (let imageIndex = 0; imageIndex < nbImages; imageIndex += 1) {
      images.push(getImageSize(input, imageIndex))
    }
    return {
      width: imageSize.width,
      height: imageSize.height,
      images: images,
    }
  }
Similar to what we found in icns.ts, we can manipulate the value of nbImages to be rather large. In this case it does not result in an infinite loop, but it can create a much larger array than expected. Luckily, an attacker is constrained by nbImages being only two bytes, so the maximum possible number of iterations is 65,535. Still, not ideal, and if an attacker can bulk-upload these images it could cause problems. All things considered, though, I think Fraim did a good job labeling this as a warning rather than an error (which would signal the need for an immediate fix).
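One hypothetical mitigation (not the library’s code) is to sanity-check the declared count against the input length. An ICO file starts with a 6-byte header followed by 16-byte directory entries, so the file size itself bounds how many entries can actually be present:

```typescript
// Hypothetical guard: clamp a declared image count to what the input
// could physically contain (6-byte ICONDIR header + 16 bytes per entry).
function clampImageCount(nbImages: number, inputLength: number): number {
  const headerSize = 6
  const entrySize = 16
  const maxEntries = Math.max(0, Math.floor((inputLength - headerSize) / entrySize))
  return Math.min(nbImages, maxEntries)
}

// A forged count of 65535 in a 100-byte file clamps to the 5 entries that fit.
console.log(clampImageCount(65535, 100)) // → 5
```

With a guard like this, a forged two-byte count can no longer force tens of thousands of loop iterations on a tiny input.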
Insecure Design in gif.ts
This file also had 2 warning-level findings:
- GIF.validate reads 6 bytes for the GIF signature without checking the input length.
- GIF.calculate reads width/height at fixed offsets (6 and 8) without checking the input length, enabling a denial of service via truncated GIF headers.
Honestly, both of these are fairly low priority in my opinion. Yes, the validate function should be checking the input length, but it will just return false if the result of toUTF8String(input, 0, 6) is undefined, which in this case would be the correct response.
As for the calculate issue, this one could definitely cause upstream bugs, but it will just return undefined for the corresponding height or width, so it’s less concerning. It could still result in a crash, though, if the upstream code hasn’t handled that case properly.
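For completeness, here is a hedged sketch of what those length guards might look like. These helpers are simplified stand-ins for the library’s toUTF8String/readUInt16LE, not its actual code:

```typescript
const gifSignatures = ["GIF87a", "GIF89a"]

// Guard #1: refuse inputs too short to hold the 6-byte signature.
function validateGif(input: Uint8Array): boolean {
  if (input.length < 6) return false
  const sig = Array.from(input.subarray(0, 6), (c) => String.fromCharCode(c)).join("")
  return gifSignatures.includes(sig)
}

// Guard #2: refuse inputs too short to hold the dimensions
// (little-endian uint16 width/height at offsets 6 and 8).
function calculateGif(input: Uint8Array): { width: number; height: number } {
  if (input.length < 10) throw new TypeError("truncated GIF header")
  const width = input[6] | (input[7] << 8)
  const height = input[8] | (input[9] << 8)
  return { width, height }
}

// "GIF89a" followed by 3x2 dimensions.
const header = new Uint8Array([71, 73, 70, 56, 57, 97, 3, 0, 2, 0])
console.log(validateGif(header), calculateGif(header))
```

The validate guard keeps the quiet false return the callers already expect, while the calculate guard fails loudly rather than propagating undefined dimensions upstream.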
The Bad
So Fraim produced 5 findings, and they were mostly legitimate reports, the biggest being the DoS vulnerability that the other tools did not detect.
What about the bad? I’ll list out a few snags we ran into while running this experiment:
- Fraim failed to detect similar DoS infinite-loop errors in both heif.ts and jxl.ts (both of which were reported by Joshua Hu here).
- Vibecoding a fix to the issue in icns.ts was unusable. It didn’t read well, and messed up the logic.
- When running Fraim with different models, we saw some wildly different results. I was running with gpt-5; when we ran with Gemini 2.5 Pro we got 17 errors, most of which were false positives or bad validations that should have been warnings at most. We ran again with gpt-5 and got only 2 results, none of which were DoS-related.
Conclusion
The amount of ground we can cover in finding and fixing vulnerabilities is dramatically increased by using AI tools. This is the superpower of AI, and these are the cases we need to be leveraging hard, because attackers have the same tools we do, and if we can find vulnerabilities this easily, so can they. Now more than ever it’s important to be proactive about fixing potential issues, as they will be found and exploited faster than ever.