
Fraim Architecture 101
Explore the basics under the hood of Fraim
Overview
Fraim is, at its core, a framework. Similar to LangGraph or CrewAI, its goal is to make it easy to set up agentic AI workflows.
The key difference is that Fraim is built with security teams in mind. It provides inputs and outputs that are common in the security space, which makes authoring a workflow as simple as writing a prompt and specifying the inputs and outputs.
Architecture
Fraim’s architecture is intentionally simple, which makes it easy to get started and to extend. There are three main components to consider when creating custom functionality:
- Inputs
- Workflows
- Outputs
Inputs
Inputs consist of anything you would like to feed dynamically into an LLM workflow.
For example, Fraim has a Git input built in, which lets you run an LLM workflow on local or remote Git repositories. Workflows that could use such inputs include static analysis, threat modeling, and security scoring.
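As a rough illustration of the concept (the names below are hypothetical, not Fraim’s actual API), an input is essentially anything that can yield units of content for a workflow to reason over, such as files from a local Git checkout:

```python
# Illustrative sketch only: hypothetical names, not Fraim's real Git input.
from dataclasses import dataclass
from pathlib import Path
from typing import Iterator


@dataclass
class FileChunk:
    """One unit of input handed to the LLM workflow."""
    path: str
    content: str


def git_input(repo_path: str, extensions: tuple[str, ...] = (".py",)) -> Iterator[FileChunk]:
    """Walk a local Git checkout and yield matching source files as workflow inputs."""
    for file in Path(repo_path).rglob("*"):
        if file.is_file() and file.suffix in extensions:
            yield FileChunk(path=str(file), content=file.read_text(errors="ignore"))
```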
Workflows
Workflows are the crux of Fraim. They are defined with simple Python code and one or more LLM prompts. Simply put, Fraim keeps unnecessary abstractions and boilerplate out of your way so you can focus on the pieces that matter.
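To make that concrete, here is a minimal sketch of the shape a workflow can take: a prompt template plus a small Python function that applies it to each input and collects the results. The names and the `call_llm` callable are assumptions for illustration, not Fraim’s real workflow API.

```python
# Illustrative sketch only: hypothetical workflow shape, not Fraim's real API.
PROMPT = """You are a security reviewer. Analyze the following file for
injection vulnerabilities and respond with a JSON list of findings.

File: {path}

{content}
"""


def review_workflow(chunks, call_llm):
    """Run the prompt over each input chunk and collect the model's findings.

    `call_llm` is any callable that sends a prompt string to a model and
    returns its text response (e.g. a thin wrapper around your LLM client).
    """
    findings = []
    for chunk in chunks:
        response = call_llm(PROMPT.format(path=chunk.path, content=chunk.content))
        findings.append({"path": chunk.path, "raw_findings": response})
    return findings
```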
Outputs
Outputs are completely configurable, but Fraim provides a few common ones out of the box.
One example output is SARIF. Fraim can ensure your LLM prompt outputs a SARIF object, and will additionally generate an HTML report from the results. You can also use that SARIF in other flows, such as comparing LLM-detected vulnerabilities against your current SAST tools or uploading a security report directly to GitHub.
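For context, a SARIF output is just a structured JSON document. The sketch below shows a minimal, hand-rolled SARIF 2.1.0 envelope for the hypothetical findings from the previous example; Fraim’s built-in SARIF output produces this kind of object for you.

```python
# Illustrative sketch only: a minimal SARIF 2.1.0 envelope, not Fraim's built-in writer.
import json


def to_sarif(findings, tool_name="fraim-example"):
    """Wrap findings in the top-level SARIF structure so other tools can consume them."""
    return {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": tool_name}},
            "results": [
                {
                    "message": {"text": str(f["raw_findings"])},
                    "locations": [{
                        "physicalLocation": {
                            "artifactLocation": {"uri": f["path"]}
                        }
                    }],
                }
                for f in findings
            ],
        }],
    }


# Example usage: write the report to disk for downstream tools.
# with open("report.sarif", "w") as fh:
#     json.dump(to_sarif(findings), fh, indent=2)
```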
Interested in Learning More?
To get a deeper understanding of Fraim, check out the repo.
To learn how to write your own custom workflows, read the docs.
And if you’d like to chat with us, feel free to join our Slack or schedule a meeting.