Software supply chain security,
built for enterprise teams
dpndncY is an enterprise-grade, self-hosted software supply chain security platform. Its flagship product is the Dependency Firewall — pre-install enforcement that blocks risky packages before they enter node_modules, site-packages, or your local Maven repository. Each block / allow / bypass decision carries a signed JWS attestation with EPSS, CISA KEV, ExploitDB, reachability, attack-path, and trust-delta evidence — verifiable offline with the public key.
The same multi-signal stack also powers full software composition analysis (SCA), static application security testing (SAST), container image scanning, IaC scanning, and secrets detection — all within your own infrastructure perimeter.
Deployable as a Docker container, a Kubernetes workload via Helm, or a Windows Server installer — no developer toolchain required on the host.
Architecture
dpndncY runs as a single containerized process backed by an embedded SQLite database. There are no external service dependencies — no Redis, no Postgres, no message queue — making deployment simple and operations lightweight.
| Component | Description |
|---|---|
| HTTP API Server | Express-based REST API. All scan orchestration, authentication, and data access. |
| Embedded Database | SQLite via better-sqlite3. Schema auto-migrates on startup. Persisted via volume mount. |
| SCA Engine | Dependency manifest parser + multi-source vulnerability enrichment (OSV, NVD, GHSA, CISA KEV, EPSS). |
| SAST Engine | Proprietary 400+ rule engine: JS/TS taint analysis, Python AST analysis, multi-language pattern scanner. All run in-process. |
| Attack Path Engine | Graph builder, path finder, CWE-to-CVE correlation, exploitability scorer. |
| Web Frontend | Single-page application served as static files from the container. |
During scans, dpndncY queries external vulnerability databases (OSV.dev, NVD, GHSA) over HTTPS. Only package names, versions, and hashes are transmitted — never source code. All scan results, findings, and metadata remain exclusively inside your environment.
Quick Start
The fastest path to a running instance is Docker Compose. Copy the snippet below, fill in three environment values, and you're up and running.
docker-compose.yml
version: "3.8"
services:
dpndncy:
image: dpndncy/platform:2.9.0
restart: unless-stopped
ports:
- "3000:3000"
environment:
JWT_SECRET: "change-to-a-long-random-string"
ADMIN_EMAIL: "admin@yourcompany.com"
ADMIN_PASSWORD: "change-me-on-first-login"
volumes:
- dpndncy_data:/app/data
volumes:
dpndncy_data:
docker compose up -d
# Open http://localhost:3000 — log in with ADMIN_EMAIL / ADMIN_PASSWORD
See Docker deployment for production-hardened configuration, or Kubernetes + Helm for enterprise-scale deployments.
Docker Deployment
Docker Compose is the recommended deployment method for teams that want a production-ready instance without Kubernetes overhead. Requires Docker Engine 20.10+ and Docker Compose v2.
Production docker-compose.yml
docker-compose.yml
version: "3.8"
services:
dpndncy:
image: dpndncy/platform:2.9.0
restart: unless-stopped
ports:
- "127.0.0.1:3000:3000" # Bind to localhost; expose via reverse proxy
environment:
NODE_ENV: production
JWT_SECRET: "${JWT_SECRET}"
ADMIN_EMAIL: "${ADMIN_EMAIL}"
ADMIN_PASSWORD: "${ADMIN_PASSWORD}"
SESSION_DURATION: "8h"
# GitHub integration (optional)
GITHUB_TOKEN: "${GITHUB_TOKEN}"
# Email notifications (optional)
SMTP_HOST: "${SMTP_HOST}"
SMTP_PORT: "587"
SMTP_USER: "${SMTP_USER}"
SMTP_PASS: "${SMTP_PASS}"
volumes:
- dpndncy_data:/app/data
- dpndncy_scans:/app/data/scans # Scan history & snapshots
healthcheck:
test: ["CMD", "wget", "-qO-", "http://localhost:3000/api/health"]
interval: 30s
timeout: 10s
retries: 3
logging:
driver: json-file
options:
max-size: "50m"
max-file: "5"
volumes:
dpndncy_data:
dpndncy_scans:
Store secrets in a .env file alongside the compose file (never commit it to source control):
JWT_SECRET=your-very-long-random-secret-64-chars-minimum
ADMIN_EMAIL=admin@yourcompany.com
ADMIN_PASSWORD=initial-password-change-on-first-login
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
SMTP_HOST=smtp.yourcompany.com
SMTP_USER=dpndncy-alerts@yourcompany.com
SMTP_PASS=your-smtp-password
Reverse proxy with nginx
In production, run dpndncY behind nginx or another reverse proxy for TLS termination:
/etc/nginx/sites-available/dpndncy
server {
listen 443 ssl http2;
server_name sca.yourcompany.com;
ssl_certificate /etc/ssl/certs/sca.yourcompany.com.crt;
ssl_certificate_key /etc/ssl/private/sca.yourcompany.com.key;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 120s;
client_max_body_size 50m;
}
}
server {
listen 80;
server_name sca.yourcompany.com;
return 301 https://$host$request_uri;
}
Starting and managing the service
# Start in background
docker compose up -d
# View logs
docker compose logs -f dpndncy
# Restart after config change
docker compose restart dpndncy
# Stop
docker compose down
# Update to a new version (data persists in named volumes)
docker compose pull
docker compose up -d
Kubernetes + Helm
The dpndncY Helm chart deploys the platform as a Kubernetes Deployment with a PersistentVolumeClaim for data, a Service, and an optional Ingress resource. Requires Kubernetes 1.24+ and Helm 3.x.
Add the Helm repository
helm repo add dpndncy https://charts.dpndncy.dev
helm repo update
Install with minimal configuration
helm install dpndncy dpndncy/dpndncy-platform \
--namespace security \
--create-namespace \
--set auth.jwtSecret="your-secret-here" \
--set admin.email="admin@yourcompany.com" \
--set admin.password="initial-password"
Production values file
values-prod.yaml
replicaCount: 2
image:
repository: dpndncy/platform
tag: "2.9.0"
pullPolicy: IfNotPresent
auth:
jwtSecret: "" # Set via --set or secretRef
sessionDuration: "8h"
admin:
email: "admin@yourcompany.com"
password: "" # Set via --set or secretRef
persistence:
enabled: true
storageClass: "standard"
size: 20Gi
service:
type: ClusterIP
port: 3000
ingress:
enabled: true
className: "nginx"
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
hosts:
- host: sca.yourcompany.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: dpndncy-tls
hosts:
- sca.yourcompany.com
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
github:
token: "" # GITHUB_TOKEN — set via secretRef
smtp:
host: "smtp.yourcompany.com"
port: 587
user: ""
pass: ""
sso:
oidcIssuer: ""
clientId: ""
clientSecret: ""
helm install dpndncy dpndncy/dpndncy-platform \
--namespace security \
--create-namespace \
-f values-prod.yaml \
--set auth.jwtSecret="$(openssl rand -hex 32)" \
--set admin.password="your-initial-password"
Secrets management
For production, store sensitive values as Kubernetes Secrets and reference them via the chart's existingSecret option rather than passing them as Helm values:
kubectl create secret generic dpndncy-secrets \
--namespace security \
--from-literal=jwtSecret="$(openssl rand -hex 32)" \
--from-literal=adminPassword="your-password" \
--from-literal=githubToken="ghp_xxxx"
# In values-prod.yaml:
existingSecret: dpndncy-secrets
Upgrade
helm repo update
helm upgrade dpndncy dpndncy/dpndncy-platform \
--namespace security \
-f values-prod.yaml \
--set auth.jwtSecret="existing-secret"
The PVC is not deleted on helm upgrade or helm uninstall by default. Your scan history and database are retained across version upgrades.
Windows Installer
dpndncY is available as a signed Windows installer (.exe) for deployment on Windows Server 2019 or later. The installer handles all dependencies and installs dpndncY as a Windows Service that starts automatically with the OS.
Prerequisites
- Windows Server 2019, 2022, or Windows 10/11 (64-bit)
- 4 GB RAM minimum; 8 GB recommended
- Administrator privileges for installation
- Outbound HTTPS (port 443) to OSV.dev, NVD, and GHSA for vulnerability data
- No Node.js, Python, or other runtime required — all bundled in the installer
Installation steps
Download the installer
Download dpndncY-Setup-2.9.0-x64.exe from your licensed download portal or request it via License Request.
Run as Administrator
Right-click the installer → Run as administrator. The setup wizard will launch.
Configure the installation
The wizard will ask for:
- Install directory (default: C:\Program Files\dpndncY)
- Data directory (default: C:\ProgramData\dpndncY) — keep on a drive with sufficient space
- HTTP port (default: 3000)
- Admin email and initial password
- JWT secret — auto-generated if left blank
- Service account — by default runs as NT AUTHORITY\NetworkService
Complete installation
Click Install. The wizard will install all bundled runtimes, configure the service, and open the firewall rule for the selected port.
Access the platform
Open http://localhost:3000 (or the configured port) in a browser. Log in with the admin credentials you set during installation.
Managing the Windows Service
# View service status
sc query dpndncy
# Stop / Start / Restart
net stop dpndncy
net start dpndncy
# Or via Services MMC (services.msc) — look for "dpndncY Platform"
Post-install configuration
After installation, edit the configuration file at C:\ProgramData\dpndncY\config.env to add integration credentials (GitHub token, SMTP settings, OIDC, etc.), then restart the service.
# C:\ProgramData\dpndncY\config.env
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
SMTP_HOST=smtp.yourcompany.com
SMTP_PORT=587
SMTP_USER=dpndncy@yourcompany.com
SMTP_PASS=your-smtp-password
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/xxx
Uninstallation
Use Control Panel → Programs → Uninstall a program → dpndncY. The uninstaller removes the service and binaries. The data directory (C:\ProgramData\dpndncY) is preserved — delete it manually if you want to remove all data.
Upgrade
Run the new version's installer over the existing installation. The installer detects the previous version, stops the service, updates the binaries, and restarts — your data and configuration are preserved.
System Requirements
| Deployment | Minimum | Recommended (production) |
|---|---|---|
| Docker | Docker Engine 20.10, 2 vCPU, 4 GB RAM, 20 GB disk | 4 vCPU, 8 GB RAM, 100 GB disk (for large scan histories) |
| Kubernetes | K8s 1.24+, 2 vCPU, 4 GB RAM per pod, 20 GB PVC | 4 vCPU, 8 GB RAM, 100 GB PVC; 2 replicas |
| Windows Installer | Windows Server 2019, 4 vCPU, 4 GB RAM, 20 GB | 8 vCPU, 8 GB RAM, 100 GB on a dedicated drive |
| Requirement | Detail |
|---|---|
| Outbound HTTPS | Port 443 to api.osv.dev, services.nvd.nist.gov, api.github.com, api.first.org (EPSS) |
| Inbound HTTP | Port 3000 (configurable). Expose via reverse proxy with TLS for production. |
| Storage I/O | SSD or NVMe recommended for the data volume. SQLite is write-heavy during large scans. |
| No runtime dependencies | All runtimes (Node.js, Python) are bundled inside the container / Windows installer. The host only needs Docker or Windows Server. |
Configuration Reference
Configuration is provided via environment variables. In Docker deployments, set them in docker-compose.yml or a .env file. In Kubernetes, use Helm values or a Secret. In the Windows installer, edit C:\ProgramData\dpndncY\config.env.
Core
| Variable | Default | Description |
|---|---|---|
JWT_SECRET | required | Secret for signing session tokens. Minimum 32 random characters. Rotate this to invalidate all active sessions. |
PORT | 3000 | HTTP port the server binds to |
NODE_ENV | production | Set to production. Enables secure cookie flags and disables debug output. |
SESSION_DURATION | 8h | Validity period for browser session tokens (e.g. 4h, 1d) |
DB_PATH | /app/data/dpndncy.db | Path to the SQLite database file. Must be on a persistent volume. |
Admin account
| Variable | Description |
|---|---|
ADMIN_EMAIL | Email for the default admin account, created on first startup only |
ADMIN_PASSWORD | Initial password. Change immediately after first login via Profile → Change Password |
SAST engine tuning
| Variable | Default | Description |
|---|---|---|
SAST_MAX_RUNTIME_SEC | 300 | Max wall-clock seconds per SAST scan before forced timeout. Increase for large monorepos. |
SAST_STORAGE_PATH | /app/data/sast | Directory for SARIF output files |
Email (SMTP)
| Variable | Description |
|---|---|
SMTP_HOST | SMTP relay hostname (e.g. smtp.office365.com, smtp.gmail.com) |
SMTP_PORT | 587 for STARTTLS (recommended), 465 for implicit TLS, 25 for unauthenticated relay |
SMTP_USER | SMTP authentication username |
SMTP_PASS | SMTP password or app password |
SMTP_FROM | From address for notifications (e.g. dpndncy@yourcompany.com) |
Integrations
| Variable | Description |
|---|---|
GITHUB_TOKEN | GitHub PAT with repo scope — enables repository listing and remediation PRs |
GITLAB_TOKEN | GitLab PAT with api scope |
GITLAB_URL | GitLab instance URL (default: https://gitlab.com). Set for self-hosted GitLab. |
SLACK_WEBHOOK_URL | Slack incoming webhook URL for scan notifications |
DISCORD_WEBHOOK_URL | Discord webhook URL |
OIDC_ISSUER | OIDC issuer URL (Okta, Azure AD, Auth0) |
OIDC_CLIENT_ID | OIDC client ID |
OIDC_CLIENT_SECRET | OIDC client secret |
OIDC_CALLBACK_URL | Full callback URL, e.g. https://sca.yourcompany.com/auth/oidc/callback |
First-Time Setup
Log in with the admin account
Open the platform URL in a browser. Log in with the ADMIN_EMAIL and ADMIN_PASSWORD you configured.
Change the admin password
Go to Profile → Change Password. The initial password is a placeholder — change it immediately to a strong credential.
Connect your source code repositories
Go to Settings → Integrations and connect GitHub or GitLab. This enables repository browsing and remediation PRs/MRs.
Run your first scan
Go to Scans → New Scan. Select a repository or enter a local path (accessible from the container). Click Start Scan.
Invite team members
Go to Settings → Users → Invite User. Assign the viewer role for read-only access or admin for full access. Or configure SSO for automatic provisioning.
Configure CI/CD integration
Generate a Personal API Token and add it to your CI/CD pipeline secrets. Use the CI/CD examples to add security gates to your pipelines.
Dependency Firewall
The Dependency Firewall is dpndncY's pre-install enforcement layer. It evaluates every {ecosystem, name, version} request against vulnerability data, exploitability signals, trust score, license policy, and tenant-specific rules — and returns an allow / block / review decision before the package ever lands in node_modules, site-packages, or your local Maven repository.
Why pre-install
Post-scan SCA tools (Snyk, Black Duck, Dependabot) tell you what's wrong after a vulnerable package is already in your tree. The firewall stops the install from happening in the first place. The same multi-signal exploitability stack that powers dpndncY's prioritization — CISA KEV, EPSS, ExploitDB, JS/TS reachability, attack-path graphs, trust score, license obligations — is applied at install time, not after.
Decision evidence (signed)
Every decision the firewall makes carries a JSON Web Signature (JWS) attestation containing:
- The decision and rationale (Patch Now / Patch This Sprint / Monitor / Accept Risk / Block / Allow)
- Each signal value with its source: EPSS score and snapshot URL, CISA KEV catalog version, ExploitDB entry IDs, reachability proofs (file:line of vulnerable function calls), CVSS vector and score
- The trust score and trust delta vs. the previously approved version
- The policy ID and version that was applied
- Scan ID, project ID, tenant ID, and timestamp
Attestations are signed with the dpndncY licensing keypair. Any party with the public key can verify the bundle offline — auditors, CI pipelines storing build artifacts, downstream customers proving supply-chain hygiene.
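A consumer can inspect an attestation's claims by decoding the payload segment of the compact JWS. The sketch below does only that — it does not verify the signature; real offline verification checks the signature segment against the dpndncY public key with a JOSE library, and the toy attestation built here is illustrative, not a real signed bundle.

```python
import base64
import json

def decode_attestation(jws: str) -> dict:
    """Decode the payload segment of a compact JWS attestation.

    NOTE: this does NOT verify the signature. Real verification checks
    the third segment against the dpndncY public key with a JOSE library
    (e.g. PyJWT); this sketch only inspects the claims.
    """
    _header, payload, _signature = jws.split(".")
    padded = payload + "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

# A toy attestation, just to show the shape — not a real signed bundle.
b64 = lambda raw: base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
claims = {"decision": "Block", "policy": {"id": "policy_pci_dss_default", "version": 7}}
toy = ".".join([b64(b'{"alg":"ES256"}'), b64(json.dumps(claims).encode()), b64(b"sig")])

print(decode_attestation(toy)["decision"])  # → Block
```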
Modes
- Enforce — block requests that violate policy. Bypass requires a signed waiver.
- Soak / monitor-only — log every decision without blocking, for a configurable rollout phase.
- Review — route ambiguous decisions to a human approver with full evidence attached.
Trust-delta gating
Beyond absolute thresholds, the firewall flags any package whose trust score has dropped relative to the previously approved version — catching typosquats, package takeovers, and maintainer rotations that absolute thresholds miss.
Bypass and audit
Bypassing the firewall always requires one of: a human approver, a policy waiver with an expiry date, or an emergency token. Every bypass attempt is itself audited and signed — even bypassed installs leave an evidence trail.
Engine
The firewall engine ships in src/firewall/: evaluator.js (decision logic), packageRequest.js (request normalization), policy.js (policy evaluation), safeVersions.js (allowed-version resolution). The registry-proxy layer for transparent enforcement at the package-manager level (npm, PyPI, Maven Central, NuGet, RubyGems, Crates.io, proxy.golang.org) is in active build-out.
SCA Scanning
Software Composition Analysis (SCA) scans dependency manifests, resolves the full dependency tree, and checks each package against multiple vulnerability databases. Results are enriched with real-world exploitability signals to help teams prioritize what actually matters.
How it works
- dpndncY traverses the target directory for supported manifest and lock files
- Dependency trees are parsed — direct and transitive dependencies at exact resolved versions
- Package identifiers are queried against OSV, NVD, GHSA, CISA KEV, and EPSS
- Findings are enriched with CVSS scores, EPSS probability, KEV status, and ExploitDB references
- A composite risk score is computed per vulnerability combining all signals
- Results are stored and surfaced in the UI with remediation guidance and upgrade paths
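The exact weighting behind the composite risk score is internal to dpndncY; the sketch below is purely hypothetical, illustrating how the signals above (CVSS, EPSS, KEV status, exploit availability) might fold into a single 0–10 prioritization number.

```python
def composite_risk(cvss: float, epss: float, kev: bool, has_exploit: bool) -> float:
    """Hypothetical composite risk score on a 0-10 scale.

    The real dpndncY weighting is internal; this only shows the shape:
    CVSS severity amplified by exploitation signals, clamped back to 0-10.
    """
    score = cvss                # CVSS 0-10 base severity
    score *= 1.0 + epss         # EPSS 0-1: scale up likely-exploited vulns
    if kev:
        score *= 1.5            # CISA KEV: actively exploited in the wild
    if has_exploit:
        score *= 1.2            # public exploit code available
    return min(round(score, 2), 10.0)

print(composite_risk(cvss=9.8, epss=0.912, kev=True, has_exploit=True))  # clamps to 10.0
print(composite_risk(cvss=5.0, epss=0.0, kev=False, has_exploit=False))  # → 5.0
```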
Supported Ecosystems (17)
17 dedicated ecosystem scanners plus a generic fallback. Both the Dependency Firewall and post-scan SCA share the same parser stack — one source of truth.
| Ecosystem | Manifest files detected | Lock file support |
|---|---|---|
| npm / Node.js | package.json | package-lock.json, yarn.lock, pnpm-lock.yaml |
| Python | requirements.txt, Pipfile, pyproject.toml | Pipfile.lock, poetry.lock |
| Java / Maven | pom.xml | — |
| Java / Gradle | build.gradle, build.gradle.kts | gradle.lockfile |
| Go | go.mod | go.sum |
| .NET / NuGet | *.csproj, packages.config | packages.lock.json |
| Ruby | Gemfile | Gemfile.lock |
| PHP / Composer | composer.json | composer.lock |
| Rust / Cargo | Cargo.toml | Cargo.lock |
| C / C++ | conanfile.txt, vcpkg.json, CMakeLists.txt | — |
| Perl / CPAN | cpanfile, META.json, Makefile.PL | — |
| R / CRAN | DESCRIPTION, renv.lock | renv.lock |
| Dart / Pub | pubspec.yaml | pubspec.lock |
| Elixir / Hex | mix.exs | mix.lock |
| OCaml / OPAM | *.opam, opam.locked | opam.locked |
| Swift / SPM | Package.swift | Package.resolved |
| Generic fallback | Multi-language reachability scanner for any ecosystem without a dedicated parser | — |
Lock files are used when present. They contain exact resolved versions for the entire dependency tree, resulting in more accurate CVE matching than manifest files alone.
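Manifest detection boils down to a filename-to-ecosystem lookup. A condensed sketch covering a few of the files from the table above (the real scanner handles all 17 ecosystems plus glob patterns like *.csproj and *.opam):

```python
# Condensed filename → ecosystem mapping, drawn from the table above.
MANIFESTS = {
    "package.json": "npm", "package-lock.json": "npm", "yarn.lock": "npm",
    "requirements.txt": "pypi", "poetry.lock": "pypi",
    "pom.xml": "maven", "go.mod": "go", "go.sum": "go",
    "Gemfile": "rubygems", "composer.json": "composer",
    "Cargo.toml": "cargo", "Cargo.lock": "cargo",
}

def detect_ecosystem(filename: str) -> str:
    """Route a manifest/lock file to its dedicated parser, or the fallback."""
    return MANIFESTS.get(filename, "generic-fallback")

print(detect_ecosystem("package-lock.json"))  # → npm
print(detect_ecosystem("unknown.lock"))       # → generic-fallback
```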
Vulnerability Sources
| Source | Data provided |
|---|---|
| OSV.dev | Open source vulnerability database (Google). Primary advisory source for npm, PyPI, Maven, Go, NuGet, Cargo, RubyGems. |
| NVD | NIST National Vulnerability Database. CVSS v3.1 base scores and vector strings. |
| GHSA | GitHub Security Advisories. Earlier disclosure, ecosystem-enriched detail. |
| CISA KEV | CISA Known Exploited Vulnerabilities catalog. Any CVE here is actively exploited in the wild — highest priority. |
| EPSS | Exploit Prediction Scoring System (FIRST.org). 0–1 probability of exploitation in the next 30 days. |
| ExploitDB | Public exploit code database. Presence of working exploit code amplifies severity. |
SAST Scanning
The dpndncY SAST engine performs static analysis on your source code using three parallel analyzers, applying 404 rules across 13+ languages. No external SAST tool installation is required — all analysis runs inside the container.
| Analyzer | Languages | Method |
|---|---|---|
| Taint Analyzer | JavaScript, TypeScript, Python | Intra-function data flow tracking from user-controlled sources to dangerous sinks, with call graph resolution up to depth 5. GraphQL resolver args.* and tRPC input.* as taint sources; Sequelize/Knex raw SQL methods, Fastify reply.send(), email transporter sinks all modeled. |
| Lang-specific AST Analyzer | Java, Kotlin, Go, C#, PHP, Ruby, Scala, Swift, Dart, Apex, VB.NET, Objective-C, C/C++ | Per-language AST analyzers detect SQL injection, XXE, SSRF, path traversal, deserialization, mass assignment, and language-idiomatic anti-patterns. Java analyzer covers Spring framework controllers; Kotlin analyzer adds JDBC string templates and Spring Boot cross-file analysis. |
| Pattern Analyzer | All 13+ languages + IaC + secrets | 404 regex/AST patterns covering injection, crypto misuse, secrets, insecure APIs, framework misconfiguration. Inline suppression supported via // dpndncy-ignore, // nosec, # noqa, // NOSONAR, // lgtm. |
Starting a SAST scan via API
POST /api/sast/scan
Authorization: Bearer <token>
Content-Type: application/json
{
"repoPath": "/mnt/repos/myapp",
"branch": "feature/payment-refactor",
"baseBranch": "main",
"deltaOnly": true
}
Scans run asynchronously. Poll the run status:
GET /api/sast/runs/:runId
# status: "pending" | "running" | "completed" | "failed"
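A minimal polling helper might look like the sketch below. The HTTP call is injected as a callable so the example stays self-contained; in practice it would wrap your HTTP client of choice, sending the Bearer token to GET /api/sast/runs/:runId and returning the status field.

```python
import time

TERMINAL = {"completed", "failed"}

def wait_for_run(run_id: str, get_status, interval_sec: float = 5.0, max_polls: int = 120) -> str:
    """Poll until the run reaches a terminal status.

    `get_status` is any callable returning the current status string
    ("pending" | "running" | "completed" | "failed") — e.g. a wrapper
    around an authenticated GET to /api/sast/runs/:runId.
    """
    for _ in range(max_polls):
        status = get_status(run_id)
        if status in TERMINAL:
            return status
        time.sleep(interval_sec)
    raise TimeoutError(f"run {run_id} not terminal after {max_polls} polls")

# Demo with a fake status source standing in for the HTTP call:
history = iter(["pending", "running", "completed"])
print(wait_for_run("run-1", lambda _rid: next(history), interval_sec=0))  # → completed
```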
Supported Languages
| Language | Rules | Analysis depth |
|---|---|---|
| JavaScript / TypeScript | 80+ | Full taint tracking with call graph; GraphQL/tRPC sources; Sequelize/Knex/Fastify sinks |
| Python | 55+ | AST taint analysis — subprocess/exec/eval/deserialization/SMTP/redirect sinks; XXE for lxml/ElementTree |
| Java | 34+ | SQL injection, XXE, SSRF, insecure deserialization, path traversal, mass assignment, CSRF disable |
| Kotlin | 52 | All 34 Java rules plus 6 Kotlin-specific (JDBC string templates, ProcessBuilder, File path, URL SSRF, ObjectInputStream, hardcoded creds). Spring Boot cross-file analysis. |
| C# | 25+ | SQL injection, LDAP injection, XSS, insecure cryptography, mass assignment binding |
| Go | 25+ | Command injection, SSRF, path traversal, weak crypto, missing HTTP timeouts, XPath injection |
| PHP | 25+ | SQL injection, XSS, eval injection, file inclusion, SSRF, XPath, missing CSRF token |
| Ruby | 20+ | SQL injection, command injection, mass assignment, SSRF, XXE via Nokogiri, XPath, skip_before_action |
| C / C++ | 30+ | Buffer overflows, format strings, unsafe functions (gets/scanf/strcpy), TOCTOU, MD5/SHA1, double-free, UAF |
| Scala / Swift / Dart / Apex / VB.NET / Objective-C | tier-4 build-aware | Build-context-activated framework model packs (Play WS, Alamofire, Dio, AFNetworking, RestSharp). Source/sink propagation across helpers. |
| IaC | 30+ | Terraform, CloudFormation (JSON+YAML), Kubernetes manifests — CWE-269 privesc, CWE-22 path traversal, capability misconfigurations, exposed ports, weak secrets |
| Secrets (all files) | 731 rules | AWS, GCP, Azure, GitHub/GitLab tokens, OpenAI/Anthropic keys, private keys, JWTs, DB connection strings, and more |
Rule Engine
Each rule defines an id, severity (CRITICAL / HIGH / MEDIUM / LOW), confidence (HIGH for taint-confirmed, MEDIUM for pattern-matched), associated CWEs, and remediation guidance.
CWE identifiers in SAST findings are correlated with CVEs in SCA results to compute Attack Path boosts — a SAST finding in the same package as a CVE with a matching CWE scores 1.3× higher.
Suppressing findings
POST /api/sast/runs/:runId/suppress
Authorization: Bearer <token>
{
"findingId": "uuid",
"reason": "False positive — input validated by middleware"
}
Suppressed findings remain in the audit log but are excluded from policy evaluation and dashboard counts.
Attack Path Graph
The Attack Path Graph maps how an attacker could move from a vulnerable dependency through your codebase to a reachable entry point. It combines SCA vulnerability data with SAST code findings and import resolution to produce scored, prioritized attack chains.
Path score formula
score = depRiskScore × reachabilityWeight × sinkWeight × aiAmplification × cweBoost
depRiskScore → CVSS + EPSS composite (0–10)
reachabilityWeight → 1.0 imported / 1.3 called directly
sinkWeight → 1.5 SQL/exec, 1.3 path/SSRF, 1.0 log
aiAmplification → 1.0–1.2 based on AI risk context
cweBoost → 1.3× when SAST CWE matches CVE CWE
Range: [0, 2.0]
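Transcribed directly into code, with one stated assumption: since the product of the published weights alone can exceed 2.0, the sketch clamps the result to the documented [0, 2.0] range. The reachability/sink key names are illustrative, not engine identifiers.

```python
def path_score(dep_risk, reachability, sink, ai_amp=1.0, cwe_match=False):
    """score = depRiskScore × reachabilityWeight × sinkWeight × aiAmplification × cweBoost."""
    reach_w = {"imported": 1.0, "called": 1.3}[reachability]        # 1.3 when called directly
    sink_w = {"sql_exec": 1.5, "path_ssrf": 1.3, "log": 1.0}[sink]  # sink criticality
    cwe_boost = 1.3 if cwe_match else 1.0                           # SAST CWE matches CVE CWE
    raw = dep_risk * reach_w * sink_w * ai_amp * cwe_boost
    # Assumption: clamp to the documented [0, 2.0] range, since the
    # weight product alone can exceed 2.0.
    return min(raw, 2.0)

print(path_score(0.8, "imported", "log"))                # → 0.8
print(path_score(1.0, "called", "sql_exec", 1.2, True))  # clamps to 2.0
```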
API
GET /api/scans/:id/attack-graph # Full graph (nodes + edges)
GET /api/scans/:id/attack-path/:pathId # Path detail + narrative explanation
Container Image Scanning
dpndncY parses Docker-save tarballs and extracts the dependency manifest from each image layer across 9 ecosystems. Live registry pull (Docker Hub, ECR, GCR, Quay) is in active build-out.
Supported per-layer ecosystems
- OS packages: Debian (/var/lib/dpkg/status), Alpine (/lib/apk/db/installed), RPM (/var/lib/rpm/Packages)
- Application packages per layer: npm, PyPI, Go modules, RubyGems, PHP Composer, Rust Cargo, .NET
- Dockerfile linting: when present in the tarball
- Secrets scanning: optional secret scan across layer files using the same 731-rule scanner
API
POST /api/scan/container
Authorization: Bearer <token>
Content-Type: multipart/form-data
# Body:
# image=<docker-save-tarball.tar>
# imageRef=docker.io/library/nginx:1.27 (optional metadata)
Infrastructure-as-Code (IaC) Scanning
IaC scanning runs as part of the SAST workflow. Detects security misconfigurations across Terraform, CloudFormation (JSON + YAML), and Kubernetes manifests.
| Format | Detection | Example checks |
|---|---|---|
| Terraform | .tf files | Public S3 buckets, open security groups, unencrypted RDS, missing CloudTrail, hard-coded credentials |
| CloudFormation | .yaml, .yml, .json with AWSTemplateFormatVersion | Same controls as Terraform plus stack-specific checks. JSON support added in v2.8. |
| Kubernetes | .yaml/.yml with apiVersion + kind | Privilege escalation (CWE-269), path traversal (CWE-22), privileged: true, missing securityContext, host-network/host-PID, default service accounts |
| Dockerfile | Within tarballs | Best-practice lints alongside container scan |
Non-IaC JSON (package.json, tsconfig.json, etc.) is excluded from CloudFormation rule matching.
Secrets Detection
731-rule scanner runs alongside SCA and SAST. Detects credentials and tokens across all source files and configuration formats.
- Cloud provider keys: AWS access keys, GCP service account JSON, Azure connection strings
- SCM tokens: GitHub PAT (github_pat_*, ghp_*), GitLab tokens, Bitbucket app passwords
- API keys: OpenAI (sk-*), Anthropic (sk-ant-*), Stripe, SendGrid, Mailgun, Twilio
- Database connection strings: PostgreSQL, MySQL, MongoDB, Redis URIs with embedded credentials
- Inline suppression: same comment markers as SAST
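As an illustration, a couple of the token prefixes above can be matched with loose regexes. These are deliberately simplified sketches — the shipped 731 rules pin exact token lengths and charsets, which these patterns do not.

```python
import re

# Loose, illustrative patterns for a few of the prefixes named above.
PATTERNS = {
    "github-pat-fine-grained": re.compile(r"\bgithub_pat_[A-Za-z0-9_]{20,}"),
    "github-pat-classic": re.compile(r"\bghp_[A-Za-z0-9]{20,}"),
    "anthropic-key": re.compile(r"\bsk-ant-[A-Za-z0-9\-_]{20,}"),
    "openai-key": re.compile(r"\bsk-(?!ant-)[A-Za-z0-9]{20,}"),  # don't double-match Anthropic keys
}

def scan_line(line: str):
    """Return the names of all patterns that fire on a single line."""
    return [name for name, rx in PATTERNS.items() if rx.search(line)]

print(scan_line("token = 'ghp_" + "a" * 36 + "'"))  # → ['github-pat-classic']
```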
Decision Engine & Signed Evidence
Every vulnerability gets a prioritized decision and SLA. Every decision — firewall block, Patch-Now triage, Accept-Risk — produces a signed JWS attestation.
Decision matrix
| Decision | SLA | Triggers |
|---|---|---|
| Patch Now | 48h | CISA KEV listed; OR EPSS ≥ 0.85; OR EPSS ≥ 0.7 with reachable code path |
| Patch This Sprint | 336h (14d) | EPSS ≥ 0.3; OR Critical severity with fix available; OR public ExploitDB entry; OR reachable Critical/High |
| Monitor | 720h (30d) | EPSS ≥ 0.05; OR High severity; OR reachable Medium |
| Accept Risk | — | No active exploitation signals |
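The trigger columns translate directly into a cascading check. A sketch of the matrix above (the input field names are illustrative, not the API schema):

```python
def decide(epss, kev, severity, reachable, fix_available=False, exploitdb=False):
    """Decision matrix from the table above. Returns (decision, SLA hours)."""
    if kev or epss >= 0.85 or (epss >= 0.7 and reachable):
        return ("Patch Now", 48)
    if (epss >= 0.3 or (severity == "Critical" and fix_available)
            or exploitdb or (reachable and severity in ("Critical", "High"))):
        return ("Patch This Sprint", 336)   # 14 days
    if epss >= 0.05 or severity == "High" or (reachable and severity == "Medium"):
        return ("Monitor", 720)             # 30 days
    return ("Accept Risk", None)            # no active exploitation signals

print(decide(epss=0.912, kev=True, severity="Critical", reachable=True))  # → ('Patch Now', 48)
```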
Signed JWS attestation
Each decision produces a JSON Web Signature bundle containing:
{
"decision": "Patch Now",
"urgency": "critical",
"slaHours": 48,
"rationale": [
"Listed in CISA Known Exploited Vulnerabilities catalog",
"EPSS score 0.912 indicates very high exploitation probability",
"Reachable: minimist.parse() used in src/server.js, src/config.js"
],
"evidence": {
"epss": { "value": 0.912, "source": "https://api.first.org/data/v1/epss?cve=CVE-XXXX-XXXX", "fetched_at": "2026-04-28T14:30:00Z" },
"kev": { "catalog_version": "2026-04-27", "listed_at": "2026-03-15" },
"exploitDb": [{ "edb_id": "51234", "url": "https://www.exploit-db.com/exploits/51234" }],
"reachability": [{ "function": "minimist.parse", "file": "src/server.js", "line": 42 }],
"cvss": { "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "score": 9.8 }
},
"policy": { "id": "policy_pci_dss_default", "version": 7 },
"trustDelta": -28,
"scan_id": "...", "project_id": "...", "tenant_id": "...",
"timestamp": "2026-04-28T14:30:42Z"
}
Signed with the dpndncY licensing keypair. Verify offline with the public key — auditors, downstream customers, CI pipelines storing build artifacts.
Trust Engine & Patch Guidance
Every package gets a 0–100 trust score derived from explainable factors. The score also drives the firewall's trust-delta gating — alerts when a version's score drops vs. the last approved version (catches typosquats, takeovers, and maintainer rotations).
Factors
- Maintainer count and historical activity
- Release cadence (stale packages flagged; flash-flood new packages flagged)
- Install-script presence and risk class
- License clarity (declared vs. inferred vs. unknown)
- Vulnerability history (count and recency)
- Anomaly index (e.g., new package + install scripts + no maintainer history)
- Coverage confidence (how much metadata was available to score against)
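dpndncY's actual factor weights are internal; the sketch below only illustrates the shape of the computation — explainable factor deductions in, bounded 0–100 score out — and the trust delta the firewall gates on.

```python
def trust_score(maintainers, days_since_release, has_install_script,
                license_known, vuln_count, anomaly_flags):
    """Hypothetical 0-100 trust score over the factor list above (weights invented)."""
    score = 100
    score -= 0 if maintainers >= 2 else 15           # bus factor
    score -= 10 if days_since_release > 730 else 0   # stale package
    score -= 15 if has_install_script else 0         # install-script risk
    score -= 10 if not license_known else 0          # license clarity
    score -= min(vuln_count * 5, 25)                 # vulnerability history
    score -= anomaly_flags * 10                      # anomaly index
    return max(score, 0)

healthy = trust_score(3, 30, False, True, 0, 0)   # → 100
risky = trust_score(1, 900, True, False, 4, 2)    # → 10
print(healthy - risky)  # the trust delta the firewall would gate on
```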
Patch guidance
For each package the trust engine emits a recommended target version with semver delta classification (patch / minor / major), the earliest non-vulnerable target, and tie-broken alternatives. This drives auto-fix PR generation.
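The semver delta classification itself is simple; a minimal sketch assuming plain MAJOR.MINOR.PATCH version strings (no pre-release or build tags):

```python
def semver_delta(current: str, target: str) -> str:
    """Classify an upgrade as patch / minor / major by comparing version parts.

    Assumes plain x.y.z strings; real resolvers also handle pre-release tags.
    """
    cur = [int(p) for p in current.split(".")]
    tgt = [int(p) for p in target.split(".")]
    if tgt[0] != cur[0]:
        return "major"
    if tgt[1] != cur[1]:
        return "minor"
    return "patch"

print(semver_delta("4.17.15", "4.17.21"))  # → patch
print(semver_delta("4.17.15", "5.0.0"))    # → major
```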
Auto-Fix Pull Requests
Open pull requests on GitHub, GitLab, and self-hosted instances with version bumps, lockfile regeneration, and breaking-change analysis attached.
Coverage
| Ecosystem | Manifest | Lockfile |
|---|---|---|
| npm | package.json | package-lock.json, yarn.lock, pnpm-lock.yaml |
| PyPI | requirements.txt | poetry.lock, Pipfile.lock |
| Maven | pom.xml | — |
| Go | go.mod | go.sum (regenerated) |
| Cargo | Cargo.toml | Cargo.lock |
| NuGet | packages.config | — |
| Composer | composer.json | — |
| RubyGems | Gemfile | Gemfile.lock |
Breaking-change analysis
Pre-flight diff between current and target version surfaces removed exports, signature changes, and major version bumps. The PR description includes the breaking-change summary so reviewers see the surface area before merging.
Platform support
- GitHub.com and self-hosted GitHub Enterprise Server
- GitLab.com and self-hosted GitLab CE/EE
- Bitbucket Cloud (early)
License Compliance & Obligations
Beyond allow/deny: surfaces the actual obligations triggered by each license — what your legal team needs to do, not just what's blocked.
- SPDX-aligned license normalization
- Obligation graph: attribution, source disclosure, copyleft scope (file / module / derivative work), patent grant, NOTICE file requirements, modifications statement
- Conflict detection: GPL + proprietary, AGPL + SaaS, copyleft + closed-source distribution
- License-cache for offline / air-gapped lookups
- Pre-install firewall enforcement: block GPL contamination before it lands in the tree
Dependency Health Scoring
Per-package health score independent of CVE status — future risk indicators, not just known vulnerabilities. Surfaces low-health packages before they get a CVE.
- Maintainer count, activity, response time
- Release cadence and last-release recency
- License clarity
- Anomaly signals (e.g., sudden ownership transfer, recent install-script addition)
- Historical vulnerability density
Notifications
Native formatting per platform — auto-detected by webhook hostname.
| Platform | Format | Detection |
|---|---|---|
| Slack | Block Kit with severity-coded sections | hooks.slack.com |
| Microsoft Teams | Adaptive Card with action buttons | webhook.office.com / outlook.office365.com |
| Discord | Rich embed with severity color | discord.com / discordapp.com |
| Generic webhook | JSON payload | Anything else (PagerDuty, Opsgenie, custom endpoints) |
| Email | SMTP with HTML + plaintext | Configured per tenant via SMTP settings |
Triggers: new findings, policy failures, firewall blocks, scan completion, license violations, trust-delta drops.
Jira & Linear Ticketing
Native API integrations for Jira (cloud and self-hosted Data Center) and Linear. Auto-create tickets from findings or firewall events; round-trip status sync back to dpndncY.
- Per-tenant config: project key, issue type, default assignee, priority mapping
- Bulk-create tickets from a filtered finding view
- Ticket includes severity, evidence bundle link, remediation guidance, and the signed JWS attestation
- Two-way sync: closing the ticket marks the finding as remediated in dpndncY
Policy Engine
Define security policies to gate CI/CD pipelines. Policies evaluate findings against thresholds, blocked rules, and EPSS minimums. A failed policy returns a non-zero exit code that fails the build.
Policy configuration
{
"thresholds": {
"critical": 0, // fail if any CRITICAL findings
"high": 3,
"medium": null, // null = no limit
"low": null
},
"blockedRules": [
"JS-TAINT-SQL-001",
"PY-EXEC-001"
],
"deltaOnly": true, // only evaluate findings in changed lines (PR gate)
"minEpss": 0.4 // only count vulns with EPSS ≥ 0.4
}
Policy evaluation
POST /api/sast/policy/evaluate
{
"runId": "uuid",
"policy": { ... }
}
# Response:
{
"passed": false,
"violations": [
{ "rule": "critical threshold", "found": 2, "limit": 0 }
]
}
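In CI, the evaluation response can gate the build directly. A minimal sketch using jq; the response is inlined here so the snippet runs standalone, but in a pipeline it would come from a curl POST to /api/sast/policy/evaluate:

```shell
# Gate a build step on the policy-evaluation response.
# In a pipeline, RESULT would come from:
#   curl -sf -X POST "$DPNDNCY_URL/api/sast/policy/evaluate" \
#     -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
#     -d "{\"runId\":\"$RUN_ID\",\"policy\":$POLICY}"
# Inlined sample so the snippet is self-contained:
RESULT='{"passed":true,"violations":[]}'
if echo "$RESULT" | jq -e '.passed == true' > /dev/null; then
  GATE=passed
else
  GATE=failed
  # Print each violation as "rule: found N, limit M"
  echo "$RESULT" | jq -r '.violations[] | "\(.rule): found \(.found), limit \(.limit)"'
fi
echo "policy $GATE"
```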
SBOM & Export
| Format | Endpoint | Use case |
|---|---|---|
| CycloneDX 1.4 JSON | GET /api/scans/:id/sbom | SBOM for compliance, procurement, auditors |
| SARIF 2.1.0 | GET /api/sast/runs/:id/sarif | SAST findings for GitHub Code Scanning, Azure DevOps, IDE plugins |
| CSV | GET /api/scans/:id/export/csv | SCA findings for reporting, spreadsheet analysis |
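For quick audits, the CycloneDX export pipes cleanly into jq. A sketch; the curl line is the documented endpoint, and the jq step runs here against a minimal inline document so the snippet is self-contained:

```shell
# Summarize a CycloneDX SBOM export.
# Fetch (documented endpoint):
#   curl -sf "$DPNDNCY_URL/api/scans/$SCAN_ID/sbom" \
#     -H "Authorization: Bearer $TOKEN" -o sbom.json
# Minimal inline sample stands in for sbom.json:
SBOM='{"bomFormat":"CycloneDX","specVersion":"1.4","components":[{"type":"library","name":"lodash","version":"4.17.15"}]}'
SUMMARY=$(echo "$SBOM" | jq -r '"\(.components | length) components (CycloneDX \(.specVersion))"')
echo "$SUMMARY"
```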
Scan History & Trends
Each completed scan saves a snapshot. The trend engine compares consecutive snapshots to compute a risk delta: new findings, resolved findings, and change in composite risk score. Trend data powers the dashboard timeline chart.
GET /api/scans/:id/history # List historical snapshots for a project
GET /api/scans/:id/trend # Risk delta between last 2 snapshots
AI Risk Detection
dpndncY flags AI/ML-specific security risks in addition to standard vulnerabilities:
- Model loading via insecure deserialization (pickle, unsafe torch.load)
- LLM prompt injection surface in AI framework integrations
- Model supply chain risks: packages that download models from unverified registries at runtime
- Training data exposure via logging, serialization, or external API calls
Authentication
| Type | Lifetime | Use case |
|---|---|---|
| Session token | 8h (configurable) | Browser UI. Issued on login, stored as HTTP-only cookie. |
| Personal API Token (PAT) | 1 year (configurable) | CI/CD pipelines, VS Code extension, API scripts. Passed as Authorization: Bearer <token>. |
Creating a PAT
Via UI: Profile → API Tokens → Create Token
POST /api/tokens
Authorization: Bearer <session-token>
{ "name": "GitHub Actions", "expiresIn": "365d" }
# Save the returned token value — shown only once
API Reference
Base URL: https://sca.yourcompany.com. All endpoints require Authorization: Bearer <token> unless noted.
Scans (SCA)
POST /api/scans
{ "repoPath": "/path/or/git-url", "branch": "main", "label": "optional" }
SAST
POST /api/sast/scan
{ "repoPath": "/path", "branch": "feat/x", "baseBranch": "main", "deltaOnly": true }
Packages (VS Code / quick check)
POST /api/packages/check
{ "packages": [{ "name": "lodash", "version": "4.17.15", "ecosystem": "npm" }] }
Tokens
POST /api/tokens to create, DELETE /api/tokens/:id to revoke
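The packages endpoint pairs well with a lockfile: extract name/version pairs and post them for a quick check without a full scan. A sketch for npm lockfile v2/v3, where the "packages" map uses node_modules/&lt;name&gt; keys (nested dependency paths would need extra handling):

```shell
# Build a /api/packages/check request body from a package-lock.json (v2/v3).
# Inline sample stands in for: LOCK=$(cat package-lock.json)
LOCK='{"packages":{"":{"name":"app"},"node_modules/lodash":{"version":"4.17.15"}}}'
BODY=$(echo "$LOCK" | jq '{packages: [ .packages | to_entries[]
  | select(.key != "")
  | { name: (.key | sub("^node_modules/"; "")),
      version: .value.version,
      ecosystem: "npm" } ]}')
echo "$BODY"
# Post it:
#   curl -sf -X POST "$DPNDNCY_URL/api/packages/check" \
#     -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
#     -d "$BODY"
```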
CI/CD Integration
Use a Personal API Token to add dpndncY security gates to your pipeline. The typical pattern: scan → poll until complete → evaluate policy → fail build on violation.
GitHub Actions
.github/workflows/security.yml
name: Security Gate
on: [push, pull_request]
jobs:
dpndncy-scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: SCA Scan
id: sca
run: |
RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/scans \
-H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"repoPath":"${{ github.workspace }}","branch":"${{ github.ref_name }}"}')
echo "scan_id=$(echo $RESULT | jq -r .id)" >> $GITHUB_OUTPUT
env:
DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}
- name: SAST Scan
id: sast
run: |
RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/scan \
-H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"repoPath":"${{ github.workspace }}","branch":"${{ github.ref_name }}","baseBranch":"main","deltaOnly":true}')
RUN_ID=$(echo $RESULT | jq -r .runId)
# Poll until complete
for i in $(seq 1 30); do
STATUS=$(curl -sf $DPNDNCY_URL/api/sast/runs/$RUN_ID \
-H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" | jq -r .status)
[ "$STATUS" = "completed" ] && break
sleep 10
done
echo "run_id=$RUN_ID" >> $GITHUB_OUTPUT
env:
DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}
- name: Policy Gate
run: |
POLICY='{"thresholds":{"critical":0,"high":5},"deltaOnly":true}'
RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/policy/evaluate \
-H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
-H "Content-Type: application/json" \
-d "{\"runId\":\"${{ steps.sast.outputs.run_id }}\",\"policy\":$POLICY}")
echo $RESULT | jq .
echo $RESULT | jq -e '.passed == true'
env:
DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}
GitLab CI
.gitlab-ci.yml
security-scan:
stage: test
image: curlimages/curl:latest
script:
- |
SCAN=$(curl -sf -X POST $DPNDNCY_URL/api/scans \
-H "Authorization: Bearer $DPNDNCY_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"repoPath\":\"$CI_PROJECT_DIR\",\"branch\":\"$CI_COMMIT_REF_NAME\"}")
SCAN_ID=$(echo $SCAN | grep -o '"id":"[^"]*"' | cut -d'"' -f4)
SAST=$(curl -sf -X POST $DPNDNCY_URL/api/sast/scan \
-H "Authorization: Bearer $DPNDNCY_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"repoPath\":\"$CI_PROJECT_DIR\",\"branch\":\"$CI_COMMIT_REF_NAME\",\"deltaOnly\":true}")
RUN_ID=$(echo $SAST | grep -o '"runId":"[^"]*"' | cut -d'"' -f4)
for i in $(seq 1 30); do
STATUS=$(curl -sf $DPNDNCY_URL/api/sast/runs/$RUN_ID \
-H "Authorization: Bearer $DPNDNCY_TOKEN" | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
[ "$STATUS" = "completed" ] && break
sleep 10
done
POLICY_RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/policy/evaluate \
-H "Authorization: Bearer $DPNDNCY_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"runId\":\"$RUN_ID\",\"policy\":{\"thresholds\":{\"critical\":0},\"deltaOnly\":true}}")
echo $POLICY_RESULT | grep -q '"passed":true' || (echo "Security policy failed" && exit 1)
variables:
DPNDNCY_URL: https://sca.yourcompany.com
Store DPNDNCY_URL and DPNDNCY_TOKEN as CI secrets. In GitHub: Repository → Settings → Secrets → Actions. In GitLab: Settings → CI/CD → Variables. Mark them as protected and masked.
CLI Tool — Overview & Installation
The dpndncY CLI is a single standalone binary — no Node.js or runtime required on the machine running it. Download, configure once with your server URL and API token, then scan any local path, Git repo, zip archive, or container image from the terminal.
- dpndncy-win.exe — runs on Windows 10/11 and Server 2019+. Works in CMD, PowerShell, and Windows Terminal.
- dpndncy-linux — static x64 binary. Works on Ubuntu, Debian, RHEL, Alpine, and any 64-bit Linux distro.
- dpndncy-mac — x64 binary for macOS 12+. Works in Terminal and CI agents on macOS runners.
Windows setup
# 1. Download from GitHub Releases
# https://github.com/dpndncy/cli/releases/latest
# 2. (Optional) Add to PATH so 'dpndncy' works from any directory
# Copy dpndncy-win.exe to C:\Windows\System32\dpndncy.exe
# or add the folder to your PATH environment variable
# 3. Configure your server
dpndncy login --server https://sca.yourcompany.com --token dpat_your_token_here
# 4. Verify connection
dpndncy status
Linux / macOS setup
# Download (Linux example)
curl -L https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux -o dpndncy
chmod +x dpndncy
sudo mv dpndncy /usr/local/bin/
# Configure
dpndncy login --server https://sca.yourcompany.com --token dpat_your_token_here
# Verify
dpndncy status
Credentials file
dpndncy login saves your server URL and token to ~/.dpndncy/config.json so you don't need to pass them on every scan. You can override them per-command with --server and --token.
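The saved credentials can also be reused in ad-hoc curl scripts. A sketch; the key names ("server", "token") are assumptions for illustration, so inspect your own config.json to confirm the actual layout:

```shell
# Reuse saved CLI credentials for direct API calls.
# Key names are illustrative; check ~/.dpndncy/config.json on your machine.
CONFIG='{"server":"https://sca.yourcompany.com","token":"dpat_xxxxxxxx"}'  # cat ~/.dpndncy/config.json
SERVER=$(echo "$CONFIG" | jq -r '.server')
TOKEN=$(echo "$CONFIG" | jq -r '.token')
echo "talking to $SERVER"
# Example: curl -sf "$SERVER/api/scans" -H "Authorization: Bearer $TOKEN" ...
```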
Scan Command Reference
The main command. Run a security scan against a local path, repository, archive, or container image — all engines optional, all combinable.
dpndncy scan [path] [flags]
Scan engines
| Flag | Engine | What it does |
|---|---|---|
| --sca | SCA | Dependency vulnerability scan — OSV, NVD, GHSA, CISA KEV, EPSS. Default if no engine flag is given. |
| --sast | SAST | Static code analysis across 9 languages with 300+ rules, taint tracking and sink detection. |
| --secrets | Secrets | IaC and config file secrets scan — API keys, tokens, passwords in dotfiles, YAML, etc. |
| --ai-risk | AI Risk | AI-assisted code attribution and context profiling. Requires --sca. |
| --attack-paths | Attack Paths | Graph-based exploit path from vulnerable dependency to code sink. Requires --sca --sast. |
| --all | All engines | Enable SCA + SAST + Secrets + AI Risk. |
Scan targets
| Flag / Arg | Target type |
|---|---|
| dpndncy scan . | Scan current working directory (local path) |
| dpndncy scan /path/to/project | Scan a specific local directory |
| --zip <file> | Upload a zip, jar, war, or tar archive |
| --repo <url> | Scan a GitHub or GitLab repository by URL (cloned server-side) |
| --image <ref> | Scan a container image — registry ref or local tarball |
Output options
| Flag | Behaviour |
|---|---|
| --json | Print raw JSON results to stdout — suitable for scripting and parsing |
| --ci | CI mode: minimal output, exit code 1 on policy fail, exit code 2 on error |
| --output <file> | Write JSON results to a file instead of (or in addition to) stdout |
Exit codes
| Code | Meaning |
|---|---|
| 0 | Scan complete — policy passed (or no policy configured) |
| 1 | Policy FAIL — vulnerabilities exceeded defined thresholds |
| 2 | Scan error — connection failure, bad token, or scan engine error |
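When a pipeline needs to distinguish a policy failure from an infrastructure error (for example, retry on code 2 but hard-fail on code 1), the codes translate into a small wrapper. A sketch:

```shell
# Map dpndncY CLI exit codes to distinct pipeline outcomes.
# Usage: run_gate dpndncy scan --ci --sca .
run_gate() {
  "$@"
  case $? in
    0) echo "gate: passed" ;;
    1) echo "gate: policy violation" ; return 1 ;;
    2) echo "gate: scan error (connection, token, or engine)" ; return 2 ;;
    *) echo "gate: unexpected exit code" ; return 2 ;;
  esac
}
```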
Examples
# SCA-only scan on current directory (default)
dpndncy scan .
# Full scan — all engines
dpndncy scan --all /path/to/project
# SCA + SAST + Secrets
dpndncy scan --sca --sast --secrets .
# Scan a zip archive
dpndncy scan --zip build/app.jar
# Scan a GitHub repo
dpndncy scan --repo https://github.com/org/repo --sca --sast
# Scan a container image from registry
dpndncy scan --image nginx:latest
# CI mode — fails build on policy violation
dpndncy scan --ci --sca . && echo "Build OK"
# Output JSON to file
dpndncy scan --sca --json --output results.json .
Login & Status Commands
dpndncy login
Save your server URL and API token. Stored in ~/.dpndncy/config.json. Only needs to be run once per machine.
dpndncy login --server https://sca.yourcompany.com --token dpat_xxxxxxxxxxxxxxxx
Generate your token in the dpndncY web UI under Profile → API Tokens. Token format is dpat_ followed by a random string.
dpndncy status
Checks connectivity to your configured server and prints server version, auth status, and queue depth.
dpndncy status
# Or override the server for a one-off check
dpndncy status --server https://sca.yourcompany.com --token dpat_...
CI/CD with the CLI
The --ci flag enables minimal output mode and makes the CLI return exit code 1 on policy violations — perfect for pipeline gates. Store your server URL and token as CI secrets, then download the CLI binary in your pipeline.
The CLI handles polling, retry on connection errors, and progress reporting automatically. It's simpler and more reliable than hand-rolling curl polling loops.
GitHub Actions — using the CLI
.github/workflows/security.yml
name: Security Gate
on:
push:
branches: [main, develop]
pull_request:
jobs:
dpndncy-scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Download dpndncY CLI
run: |
curl -sSL https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux \
-o /usr/local/bin/dpndncy
chmod +x /usr/local/bin/dpndncy
- name: Configure dpndncY
run: dpndncy login --server ${{ secrets.DPNDNCY_URL }} --token ${{ secrets.DPNDNCY_TOKEN }}
- name: Run security scan
run: dpndncy scan --ci --sca --sast .
# Exit 0 = passed, Exit 1 = policy fail (fails the build), Exit 2 = error
GitLab CI — using the CLI
.gitlab-ci.yml
security-scan:
stage: test
image: ubuntu:22.04
before_script:
- apt-get update -q && apt-get install -yq curl
- curl -sSL https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux
-o /usr/local/bin/dpndncy
- chmod +x /usr/local/bin/dpndncy
- dpndncy login --server $DPNDNCY_URL --token $DPNDNCY_TOKEN
script:
- dpndncy scan --ci --sca --sast $CI_PROJECT_DIR
variables:
DPNDNCY_URL: https://sca.yourcompany.com
Jenkins Pipeline — using the CLI
Jenkinsfile
pipeline {
agent any
environment {
DPNDNCY_URL = credentials('dpndncy-url')
DPNDNCY_TOKEN = credentials('dpndncy-token')
}
stages {
stage('Security Scan') {
steps {
sh '''
curl -sSL https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux \
-o /tmp/dpndncy && chmod +x /tmp/dpndncy
/tmp/dpndncy login --server $DPNDNCY_URL --token $DPNDNCY_TOKEN
/tmp/dpndncy scan --ci --sca --sast .
'''
}
}
}
}
Windows CI (PowerShell / Azure DevOps)
azure-pipelines.yml
- task: PowerShell@2
displayName: 'dpndncY Security Scan'
inputs:
targetType: inline
script: |
$url = "https://github.com/dpndncy/cli/releases/latest/download/dpndncy-win.exe"
Invoke-WebRequest -Uri $url -OutFile "dpndncy.exe"
.\dpndncy.exe login --server $env:DPNDNCY_URL --token $env:DPNDNCY_TOKEN
.\dpndncy.exe scan --ci --sca .
env:
DPNDNCY_URL: $(dpndncyUrl)
DPNDNCY_TOKEN: $(dpndncyToken)
For faster pipelines, cache the CLI binary using your CI platform's cache action (GitHub Actions actions/cache, GitLab cache, etc.) keyed on the CLI version number. The binary is ~15 MB compressed and does not change between runs.
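With GitHub Actions, that looks roughly like the steps below. The version tag and cache key are placeholders; pin the download URL to a specific release rather than latest so the cached binary and the key cannot drift:

```yaml
- name: Cache dpndncY CLI
  id: cli-cache
  uses: actions/cache@v4
  with:
    path: /usr/local/bin/dpndncy
    key: dpndncy-cli-v1.2.3        # placeholder version; bump when you upgrade
- name: Download dpndncY CLI
  if: steps.cli-cache.outputs.cache-hit != 'true'
  run: |
    curl -sSL https://github.com/dpndncy/cli/releases/download/v1.2.3/dpndncy-linux \
      -o /usr/local/bin/dpndncy
    chmod +x /usr/local/bin/dpndncy
```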
GitHub Integration
Connect dpndncY to GitHub to browse repositories and open automated remediation pull requests for vulnerable dependencies.
Setup
- Create a GitHub Personal Access Token with the repo scope (or a fine-grained token with read/write on contents and pull requests)
- Set GITHUB_TOKEN in your configuration and restart the service
- Verify the connection: Settings → Integrations → GitHub
Remediation PRs
From any scan result, select affected packages and click Open Remediation PR. dpndncY creates a branch, bumps the vulnerable dependency to the patched version in the manifest and lock file, and opens a PR with full CVE context in the description.
GitLab Integration
Same capabilities as GitHub: repository browsing and automated Merge Requests for vulnerability remediation.
Setup
- Create a GitLab Personal Access Token with the api scope
- Set GITLAB_TOKEN (and GITLAB_URL for self-hosted instances) in configuration
- Restart the service
VS Code Extension
The dpndncY VS Code extension shows vulnerability data inline in your manifest files. Vulnerable packages are underlined with severity indicators — hover for CVE detail, CVSS score, and recommended fix version.
Installation
- Download dpndncy-security-*.vsix from your dpndncY instance: Settings → VS Code Extension
- In VS Code: Extensions → ⋯ → Install from VSIX…
Settings
| Setting | Description |
|---|---|
| dpndncy.serverUrl | URL of your dpndncY instance, e.g. https://sca.yourcompany.com |
| dpndncy.apiToken | Personal API Token (generate from Profile → API Tokens) |
| dpndncy.minSeverity | Minimum severity to show: LOW / MEDIUM / HIGH / CRITICAL |
| dpndncy.autoScan | Scan on file save (default: false) |
Notifications
| Channel | Configuration |
|---|---|
| Slack | Set SLACK_WEBHOOK_URL to a Slack Incoming Webhook URL. Notifications sent on scan completion and policy failure. |
| Discord | Set DISCORD_WEBHOOK_URL to a Discord webhook URL. |
| Email | Configure SMTP settings. Emails sent for scan completion, policy failures, and new CRITICAL vulnerabilities. |
| Custom webhook | POST /api/webhooks — register any HTTP endpoint to receive JSON payloads for scan events. Supports HMAC request signing. |
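For HMAC-signed deliveries, the receiver recomputes the digest over the raw request body and compares it to the signature header. A sketch with openssl; the header name and encoding (hex HMAC-SHA256) are assumptions here, so check the signing details shown when you register the webhook:

```shell
# Verify a webhook delivery signed with HMAC-SHA256 (assumed scheme).
SECRET='whsec_example'                          # placeholder shared secret
BODY='{"event":"scan.completed","passed":true}' # raw request body, byte-for-byte
EXPECTED=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
# In practice, read this from the request's signature header
# (header name varies; an assumption in this sketch):
SIG_HEADER="$EXPECTED"
if [ "$SIG_HEADER" = "$EXPECTED" ]; then echo "signature valid"; else echo "signature mismatch"; fi
```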
SSO / OIDC
dpndncY supports OIDC-based SSO with Okta, Azure AD, Auth0, and any OIDC-compliant identity provider. Users are provisioned automatically on first login. Role assignment is controlled via OIDC group claims.
OIDC_ISSUER=https://yourorg.okta.com/oauth2/default
OIDC_CLIENT_ID=0oa1b2c3d4e5
OIDC_CLIENT_SECRET=your-client-secret
OIDC_CALLBACK_URL=https://sca.yourcompany.com/auth/oidc/callback
When configured, a Sign in with SSO button appears on the login page. Password-based login for local accounts can be disabled from Settings → Authentication.
User Management
| Role | Capabilities |
|---|---|
| Admin | Full access: manage users, integrations, settings, all scans, tokens, audit log |
| Viewer | Read-only: view scan results, findings, SBOM exports. Can call /api/packages/check. Cannot start scans or change settings. |
Manage users at Settings → Users or via API:
POST /api/admin/users
Authorization: Bearer <admin-token>
{ "email": "engineer@yourcompany.com", "role": "viewer" }
API Tokens (PAT)
- Tokens are scoped to the permissions of the creating user
- The token value is shown once only — store it immediately in your secrets manager
- Create separate tokens per integration (CI, VS Code, monitoring) for independent revocation
- Revocation is instant — use Profile → API Tokens or DELETE /api/tokens/:id
- Audit token usage from Settings → Audit Log
Backup & Restore
All persistent state is in the SQLite database and the scan snapshot directory. Both live on the data volume.
Docker backup
# Backup the data volume to a tar archive
docker run --rm \
-v dpndncy_data:/data \
-v $(pwd)/backups:/backups \
alpine tar czf /backups/dpndncy-$(date +%Y%m%d).tar.gz /data
# Restore
docker run --rm \
-v dpndncy_data:/data \
-v $(pwd)/backups:/backups \
alpine tar xzf /backups/dpndncy-20260309.tar.gz -C /
Kubernetes backup
Backup the PVC using your cluster's volume snapshot mechanism (e.g., Velero, CSI snapshots):
velero backup create dpndncy-backup \
--include-namespaces security \
--wait
Windows backup
Stop the service, copy C:\ProgramData\dpndncY\ to a backup location, then restart:
net stop dpndncy
robocopy "C:\ProgramData\dpndncY" "D:\Backups\dpndncy-%date%" /E /COPYALL
net start dpndncy
Upgrades
dpndncY applies database migrations automatically on startup. Always back up your data before upgrading.
Docker
# Pull the new image and recreate the container (data volume is preserved)
docker compose pull
docker compose up -d
Kubernetes
helm repo update
helm upgrade dpndncy dpndncy/dpndncy-platform \
--namespace security \
-f values-prod.yaml
Windows
Run the new version's .exe installer over the existing installation. The installer handles the service stop/start and data migration automatically.
Read the release notes for the new version. Major versions may include breaking API or configuration changes. Back up your data volume before running any upgrade.
Troubleshooting
Container won't start
- Check that JWT_SECRET is set and non-empty
- Verify the data volume is mounted and writable by the container process
- Check logs: docker compose logs dpndncy
Scans return no findings
- Verify the target path is mounted into the container and accessible
- Check that a supported manifest file exists in the target directory
- Ensure outbound HTTPS to api.osv.dev and services.nvd.nist.gov is allowed by your firewall/proxy
SAST scan times out
- Increase SAST_MAX_RUNTIME_SEC (e.g. 600 for large monorepos)
- Use deltaOnly: true to limit analysis to changed files
- Ensure the container has sufficient CPU — SAST is CPU-bound
SSO / OIDC login fails
- Verify OIDC_CALLBACK_URL matches exactly what's registered in your IdP (including trailing slash, if any)
- Check that the dpndncY instance is reachable at the callback URL from the browser, not just from the server
- Review the server log for the OIDC error response detail
Windows Service won't start
- Check the Windows Event Viewer: Application → dpndncY
- Verify the service account has read/write access to C:\ProgramData\dpndncY
- Check that the configured port is not in use by another process: netstat -ano | findstr :3000
Viewing logs
# Docker
docker compose logs -f dpndncy
# Kubernetes
kubectl logs -n security -l app=dpndncy -f
# Windows
Get-EventLog -LogName Application -Source dpndncY -Newest 100
Licensed customers have access to priority support. Contact support@dpndncy.dev with your instance ID (visible in Settings → About) and log output.