AIsbom is a specialized security and compliance scanner for Machine Learning artifacts.
Unlike generic SBOM tools that only parse requirements.txt, AIsbom performs Deep Binary Introspection on model files (.pt, .pkl, .safetensors) to detect malware risks and legal license violations hidden inside the serialized weights.
Install directly from PyPI. No cloning required.
Note: The package name is aisbom-cli, but the command you run is aisbom.
Point it at any directory containing your ML project. It will find requirements files AND binary model artifacts.
```
aisbom scan ./my-project-folder
```

You will see a combined Security & Legal risk assessment in your terminal:
🧠 AI Model Artifacts Found
| Filename | Framework | Security Risk | Legal Risk |
|---|---|---|---|
| bert_finetune.pt | PyTorch | 🔴 CRITICAL (RCE Detected: posix.system) | UNKNOWN |
| safe_model.safetensors | SafeTensors | 🟢 LOW (Binary Safe) | UNKNOWN |
| restricted_model.safetensors | SafeTensors | 🟢 LOW | LEGAL RISK (cc-by-nc-4.0) |
A compliant sbom.json (CycloneDX v1.6) including SHA256 hashes and license data will be generated in your current directory.
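If you want to consume the generated file programmatically rather than in the viewer, a short script can walk the CycloneDX component list. This is an illustrative sketch, not part of AIsbom: `list_components` is a hypothetical helper, and the field names (`components`, `licenses`, `license.id`) follow the CycloneDX JSON schema.

```python
import json

def list_components(path):
    """Return (name, version, license IDs) for each component in a CycloneDX JSON BOM."""
    with open(path) as f:
        bom = json.load(f)
    rows = []
    for comp in bom.get("components", []):
        license_ids = [
            entry.get("license", {}).get("id", "UNKNOWN")
            for entry in comp.get("licenses", [])
        ]
        rows.append((comp.get("name"), comp.get("version"), license_ids))
    return rows
```

From here it is one more loop to fail a build whenever a restrictive license ID (e.g., one containing "NC") appears in the list.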
Don't like reading JSON? You can visualize your security posture using our offline viewer.
- Run the scan.
- Go to aisbom.io/viewer.html.
- Drag and drop your sbom.json.
- Get an instant dashboard of risks, license issues, and compliance stats.
Note: The viewer is client-side only. Your SBOM data never leaves your browser.
AI models are not just text files; they are executable programs and IP assets.
- The Security Risk: PyTorch (`.pt`) files are Zip archives containing Pickle bytecode. A malicious model can execute arbitrary code (RCE) the instant it is loaded.
- The Legal Risk: A developer might download a "Non-Commercial" model (CC-BY-NC) and deploy it to production. Because the license is hidden inside the binary header, standard tools miss it.
- The Solution: Legacy scanners read requirements.txt manifests but ignore binary model weights. We look inside: we disassemble the bytecode and headers without loading the heavy weights into RAM.
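The "license hidden inside the binary header" point is concrete in the safetensors format: the file starts with an 8-byte little-endian header length, followed by that many bytes of JSON, with metadata (including license tags, by Hugging Face convention) commonly stored under a `"__metadata__"` key. A minimal sketch of header-only reading, under those assumptions (`read_safetensors_header` and `find_license` are illustrative helpers, not AIsbom's API):

```python
import json
import struct

def read_safetensors_header(path):
    """Read only the JSON header of a .safetensors file, skipping all tensor data.

    Layout: 8-byte little-endian header length, then that many bytes of JSON.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

def find_license(header):
    # Hugging Face tooling commonly stores license info under "__metadata__".
    return header.get("__metadata__", {}).get("license")
```

Because only the length prefix and the JSON header are read, this works the same on a 10 KB test file and a 10 GB production checkpoint.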
- 🧠 Deep Introspection: Peeks inside PyTorch Zip structures and Safetensors headers without loading weights into RAM.
- 💣 Pickle Bomb Detector: Disassembles bytecode to detect `os.system`, `subprocess`, and `eval` calls before they run.
- ⚖️ License Radar: Extracts metadata from .safetensors to flag restrictive licenses (e.g., CC-BY-NC, AGPL) that threaten commercial use.
- 🛡️ Compliance Ready: Generates standard CycloneDX v1.6 JSON for enterprise integration (Dependency-Track, ServiceNow).
- ⚡ Blazing Fast: Scans GB-sized models in milliseconds by reading headers only and using streaming hash calculation.
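The streaming-hash idea in the last bullet can be sketched in a few lines; `sha256_stream` here is an illustrative helper (not AIsbom's API) that hashes a file of any size in constant memory:

```python
import hashlib

def sha256_stream(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so multi-GB models never sit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The resulting hex digest is what lands in the `hashes` field of the generated SBOM entry.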
Security tools require trust. To maintain a safe repository, we do not distribute malicious binaries. However, AIsbom includes a built-in generator so you can create safe "test dummies" to verify the scanner works.
1. Install: `pip install aisbom-cli`
2. Generate Test Artifacts: Run this command to create a fake "Pickle Bomb" and a "Restricted License" model in your current folder.
```
# Generate a mock Pickle Bomb (Security Risk) and a mock Non-Commercial Model (Legal Risk)
aisbom generate-test-artifacts
```

Result: Files named mock_malware.pt and mock_restricted.safetensors are created.
3. Scan it:
```
# You can use your globally installed aisbom, or poetry run aisbom
aisbom scan .
```

You will see the scanner flag mock_malware.pt as CRITICAL and mock_restricted.safetensors as a LEGAL RISK.
AIsbom uses a static analysis engine to disassemble Python Pickle opcodes. It looks for specific GLOBAL and STACK_GLOBAL instructions that reference dangerous modules:
- os / posix (System calls)
- subprocess (Shell execution)
- builtins.eval / exec (Dynamic code execution)
- socket (Network reverse shells)
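This kind of static opcode scan can be sketched with the standard library's `pickletools`, which disassembles the byte stream without ever unpickling it. The `DANGEROUS` set, the string-tracking heuristic for `STACK_GLOBAL`, and `scan_pickle` below are illustrative simplifications, not AIsbom's actual engine:

```python
import pickle
import pickletools

# Module/attribute pairs worth flagging (os.system pickles as posix/nt on Unix/Windows).
DANGEROUS = {
    ("os", "system"), ("posix", "system"), ("nt", "system"),
    ("subprocess", "Popen"), ("subprocess", "call"),
    ("builtins", "eval"), ("builtins", "exec"),
    ("socket", "socket"),
}

def scan_pickle(data):
    """Statically flag dangerous global references in a pickle byte stream."""
    findings = []
    strings = []  # recent string pushes; STACK_GLOBAL consumes the last two
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            ref = (strings[-2], strings[-1])
            if ref in DANGEROUS:
                findings.append(ref)
        elif opcode.name in ("GLOBAL", "INST"):
            # pickletools reports the pair as "module attr"
            mod, _, name = arg.partition(" ")
            if (mod, name) in DANGEROUS:
                findings.append((mod, name))
    return findings

class Payload:
    """A mock pickle bomb: unpickling it would call os.system."""
    def __reduce__(self):
        import os
        return (os.system, ("true",))

blob = pickle.dumps(Payload())  # serialized, but never unpickled
print(scan_pickle(blob))  # flags the system-call reference
```

The key property: the payload is detected from the opcode stream alone, so the malicious `__reduce__` never runs.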
Add AIsbom to your CI/CD pipeline to block unsafe models before they merge.
```yaml
name: AI Security Scan
on: [pull_request]
jobs:
  aisbom-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan AI Models
        uses: Lab700xOrg/aisbom@v0
        with:
          directory: '.'
```