Software supply chain security,
built for enterprise teams
dpndncY is an enterprise-grade, self-hosted software composition analysis (SCA) and static application security testing (SAST) platform. It scans dependencies and source code for vulnerabilities, computes real-world exploitability signals, maps attack paths, and enforces security policy — all within your own infrastructure perimeter.
Deployable as a Docker container, a Kubernetes workload via Helm, or a Windows Server installer — no developer toolchain required on the host.
Architecture
dpndncY runs as a single containerized process backed by an embedded SQLite database. There are no external service dependencies — no Redis, no Postgres, no message queue — making deployment simple and operations lightweight.
| Component | Description |
|---|---|
| HTTP API Server | Express-based REST API. All scan orchestration, authentication, and data access. |
| Embedded Database | SQLite via better-sqlite3. Schema auto-migrates on startup. Persisted via volume mount. |
| SCA Engine | Dependency manifest parser + multi-source vulnerability enrichment (OSV, NVD, GHSA, CISA KEV, EPSS). |
| SAST Engine | Proprietary 300+ rule engine: JS/TS taint analysis, Python AST analysis, multi-language pattern scanner. All run in-process. |
| Attack Path Engine | Graph builder, path finder, CWE-to-CVE correlation, exploitability scorer. |
| Web Frontend | Single-page application served as static files from the container. |
During scans, dpndncY queries external vulnerability databases (OSV.dev, NVD, GHSA) over HTTPS. Only package names, versions, and hashes are transmitted — never source code. All scan results, findings, and metadata remain exclusively inside your environment.
Quick Start
The fastest path to a running instance is Docker Compose. Copy the snippet below, fill in three environment values, and you're up.
# docker-compose.yml
version: "3.8"
services:
dpndncy:
image: dpndncy/platform:2.7.0
restart: unless-stopped
ports:
- "3000:3000"
environment:
JWT_SECRET: "change-to-a-long-random-string"
ADMIN_EMAIL: "admin@yourcompany.com"
ADMIN_PASSWORD: "change-me-on-first-login"
volumes:
- dpndncy_data:/app/data
volumes:
dpndncy_data:
docker compose up -d
# Open http://localhost:3000 — log in with ADMIN_EMAIL / ADMIN_PASSWORD
See Docker deployment for production-hardened configuration, or Kubernetes + Helm for enterprise-scale deployments.
Docker Deployment
Docker Compose is the recommended deployment method for teams that want a production-ready instance without Kubernetes overhead. Requires Docker Engine 20.10+ and Docker Compose v2.
Production docker-compose.yml
# docker-compose.yml
version: "3.8"
services:
dpndncy:
image: dpndncy/platform:2.7.0
restart: unless-stopped
ports:
- "127.0.0.1:3000:3000" # Bind to localhost; expose via reverse proxy
environment:
NODE_ENV: production
JWT_SECRET: "${JWT_SECRET}"
ADMIN_EMAIL: "${ADMIN_EMAIL}"
ADMIN_PASSWORD: "${ADMIN_PASSWORD}"
SESSION_DURATION: "8h"
# GitHub integration (optional)
GITHUB_TOKEN: "${GITHUB_TOKEN}"
# Email notifications (optional)
SMTP_HOST: "${SMTP_HOST}"
SMTP_PORT: "587"
SMTP_USER: "${SMTP_USER}"
SMTP_PASS: "${SMTP_PASS}"
volumes:
- dpndncy_data:/app/data
- dpndncy_scans:/app/data/scans # Scan history & snapshots
healthcheck:
test: ["CMD", "wget", "-qO-", "http://localhost:3000/api/health"]
interval: 30s
timeout: 10s
retries: 3
logging:
driver: json-file
options:
max-size: "50m"
max-file: "5"
volumes:
dpndncy_data:
dpndncy_scans:
Store secrets in a .env file alongside the compose file (never commit it to source control):
JWT_SECRET=your-very-long-random-secret-64-chars-minimum
ADMIN_EMAIL=admin@yourcompany.com
ADMIN_PASSWORD=initial-password-change-on-first-login
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
SMTP_HOST=smtp.yourcompany.com
SMTP_USER=dpndncy-alerts@yourcompany.com
SMTP_PASS=your-smtp-password
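JWT_SECRET must be long and random; one way to generate it is with OpenSSL (a sketch — any source of 32+ random bytes works):

```shell
# Generate a 64-hex-character secret (32 random bytes) and append it
# to the .env file next to the compose file
JWT_SECRET=$(openssl rand -hex 32)
printf 'JWT_SECRET=%s\n' "$JWT_SECRET" >> .env
echo "generated a ${#JWT_SECRET}-character secret"
```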
Reverse proxy with nginx
In production, run dpndncY behind nginx or another reverse proxy for TLS termination:
# /etc/nginx/sites-available/dpndncy
server {
listen 443 ssl http2;
server_name sca.yourcompany.com;
ssl_certificate /etc/ssl/certs/sca.yourcompany.com.crt;
ssl_certificate_key /etc/ssl/private/sca.yourcompany.com.key;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 120s;
client_max_body_size 50m;
}
}
server {
listen 80;
server_name sca.yourcompany.com;
return 301 https://$host$request_uri;
}
Starting and managing the service
# Start in background
docker compose up -d
# View logs
docker compose logs -f dpndncy
# Restart after config change
docker compose restart dpndncy
# Stop
docker compose down
# Update to a new version (data persists in named volumes)
docker compose pull
docker compose up -d
Kubernetes + Helm
The dpndncY Helm chart deploys the platform as a Kubernetes Deployment with a PersistentVolumeClaim for data, a Service, and an optional Ingress resource. Requires Kubernetes 1.24+ and Helm 3.x.
Add the Helm repository
helm repo add dpndncy https://charts.dpndncy.dev
helm repo update
Install with minimal configuration
helm install dpndncy dpndncy/dpndncy-platform \
--namespace security \
--create-namespace \
--set auth.jwtSecret="your-secret-here" \
--set admin.email="admin@yourcompany.com" \
--set admin.password="initial-password"
Production values file
# values-prod.yaml
replicaCount: 2
image:
repository: dpndncy/platform
tag: "2.7.0"
pullPolicy: IfNotPresent
auth:
jwtSecret: "" # Set via --set or secretRef
sessionDuration: "8h"
admin:
email: "admin@yourcompany.com"
password: "" # Set via --set or secretRef
persistence:
enabled: true
storageClass: "standard"
size: 20Gi
service:
type: ClusterIP
port: 3000
ingress:
enabled: true
className: "nginx"
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
hosts:
- host: sca.yourcompany.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: dpndncy-tls
hosts:
- sca.yourcompany.com
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
github:
token: "" # GITHUB_TOKEN — set via secretRef
smtp:
host: "smtp.yourcompany.com"
port: 587
user: ""
pass: ""
sso:
oidcIssuer: ""
clientId: ""
clientSecret: ""
helm install dpndncy dpndncy/dpndncy-platform \
--namespace security \
--create-namespace \
-f values-prod.yaml \
--set auth.jwtSecret="$(openssl rand -hex 32)" \
--set admin.password="your-initial-password"
Secrets management
For production, store sensitive values as Kubernetes Secrets and reference them via the chart's existingSecret option rather than passing them as Helm values:
kubectl create secret generic dpndncy-secrets \
--namespace security \
--from-literal=jwtSecret="$(openssl rand -hex 32)" \
--from-literal=adminPassword="your-password" \
--from-literal=githubToken="ghp_xxxx"
# In values-prod.yaml:
existingSecret: dpndncy-secrets
Upgrade
helm repo update
helm upgrade dpndncy dpndncy/dpndncy-platform \
--namespace security \
-f values-prod.yaml \
--set auth.jwtSecret="existing-secret"
The PVC is not deleted on helm upgrade or helm uninstall by default. Your scan history and database are retained across version upgrades.
Windows Installer
dpndncY is available as a signed Windows installer (.exe) for deployment on Windows Server 2019 or later. The installer handles all dependencies and installs dpndncY as a Windows Service that starts automatically with the OS.
Prerequisites
- Windows Server 2019, 2022, or Windows 10/11 (64-bit)
- 4 GB RAM minimum; 8 GB recommended
- Administrator privileges for installation
- Outbound HTTPS (port 443) to OSV.dev, NVD, and GHSA for vulnerability data
- No Node.js, Python, or other runtime required — all bundled in the installer
Installation steps
Download the installer
Download dpndncY-Setup-2.7.0-x64.exe from your licensed download portal or request it via License Request.
Run as Administrator
Right-click the installer → Run as administrator. The setup wizard will launch.
Configure the installation
The wizard will ask for:
- Install directory (default: C:\Program Files\dpndncY)
- Data directory (default: C:\ProgramData\dpndncY) — keep on a drive with sufficient space
- HTTP port (default: 3000)
- Admin email and initial password
- JWT secret — auto-generated if left blank
- Service account — by default runs as NT AUTHORITY\NetworkService
Complete installation
Click Install. The wizard will install all bundled runtimes, configure the service, and open the firewall rule for the selected port.
Access the platform
Open http://localhost:3000 (or the port you configured) in a browser. Log in with the admin credentials you set during installation.
Managing the Windows Service
# View service status
sc query dpndncy
# Stop / Start / Restart
net stop dpndncy
net start dpndncy
# Or via Services MMC (services.msc) — look for "dpndncY Platform"
Post-install configuration
After installation, edit the configuration file at C:\ProgramData\dpndncY\config.env to add integration credentials (GitHub token, SMTP settings, OIDC, etc.), then restart the service.
# C:\ProgramData\dpndncY\config.env
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
SMTP_HOST=smtp.yourcompany.com
SMTP_PORT=587
SMTP_USER=dpndncy@yourcompany.com
SMTP_PASS=your-smtp-password
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/xxx
Uninstallation
Use Control Panel → Programs → Uninstall a program → dpndncY. The uninstaller removes the service and binaries. The data directory (C:\ProgramData\dpndncY) is preserved — delete it manually if you want to remove all data.
Upgrade
Run the new version's installer over the existing installation. The installer detects the previous version, stops the service, updates the binaries, and restarts — your data and configuration are preserved.
System Requirements
| Deployment | Minimum | Recommended (production) |
|---|---|---|
| Docker | Docker Engine 20.10, 2 vCPU, 4 GB RAM, 20 GB disk | 4 vCPU, 8 GB RAM, 100 GB disk (for large scan histories) |
| Kubernetes | K8s 1.24+, 2 vCPU, 4 GB RAM per pod, 20 GB PVC | 4 vCPU, 8 GB RAM, 100 GB PVC; 2 replicas |
| Windows Installer | Windows Server 2019, 4 vCPU, 4 GB RAM, 20 GB | 8 vCPU, 8 GB RAM, 100 GB on a dedicated drive |
| Requirement | Detail |
|---|---|
| Outbound HTTPS | Port 443 to api.osv.dev, services.nvd.nist.gov, api.github.com, api.first.org (EPSS) |
| Inbound HTTP | Port 3000 (configurable). Expose via reverse proxy with TLS for production. |
| Storage I/O | SSD or NVMe recommended for the data volume. SQLite is write-heavy during large scans. |
| No runtime dependencies | All runtimes (Node.js, Python) are bundled inside the container / Windows installer. The host only needs Docker or Windows Server. |
Configuration Reference
Configuration is provided via environment variables. In Docker deployments, set them in docker-compose.yml or a .env file. In Kubernetes, use Helm values or a Secret. In the Windows installer, edit C:\ProgramData\dpndncY\config.env.
Core
| Variable | Default | Description |
|---|---|---|
| JWT_SECRET | required | Secret for signing session tokens. Minimum 32 random characters. Rotate this to invalidate all active sessions. |
| PORT | 3000 | HTTP port the server binds to |
| NODE_ENV | production | Set to production. Enables secure cookie flags and disables debug output. |
| SESSION_DURATION | 8h | Validity period for browser session tokens (e.g. 4h, 1d) |
| DB_PATH | /app/data/dpndncy.db | Path to the SQLite database file. Must be on a persistent volume. |
Admin account
| Variable | Description |
|---|---|
| ADMIN_EMAIL | Email for the default admin account, created on first startup only |
| ADMIN_PASSWORD | Initial password. Change immediately after first login via Profile → Change Password |
SAST engine tuning
| Variable | Default | Description |
|---|---|---|
| SAST_MAX_RUNTIME_SEC | 300 | Max wall-clock seconds per SAST scan before forced timeout. Increase for large monorepos. |
| SAST_STORAGE_PATH | /app/data/sast | Directory for SARIF output files |
Email (SMTP)
| Variable | Description |
|---|---|
| SMTP_HOST | SMTP relay hostname (e.g. smtp.office365.com, smtp.gmail.com) |
| SMTP_PORT | 587 for STARTTLS (recommended), 465 for implicit TLS, 25 for unauthenticated relay |
| SMTP_USER | SMTP authentication username |
| SMTP_PASS | SMTP password or app password |
| SMTP_FROM | From address for notifications (e.g. dpndncy@yourcompany.com) |
Integrations
| Variable | Description |
|---|---|
| GITHUB_TOKEN | GitHub PAT with repo scope — enables repository listing and remediation PRs |
| GITLAB_TOKEN | GitLab PAT with api scope |
| GITLAB_URL | GitLab instance URL (default: https://gitlab.com). Set for self-hosted GitLab. |
| SLACK_WEBHOOK_URL | Slack incoming webhook URL for scan notifications |
| DISCORD_WEBHOOK_URL | Discord webhook URL |
| OIDC_ISSUER | OIDC issuer URL (Okta, Azure AD, Auth0) |
| OIDC_CLIENT_ID | OIDC client ID |
| OIDC_CLIENT_SECRET | OIDC client secret |
| OIDC_CALLBACK_URL | Full callback URL, e.g. https://sca.yourcompany.com/auth/oidc/callback |
First-Time Setup
Log in with the admin account
Open the platform URL in a browser. Log in with the ADMIN_EMAIL and ADMIN_PASSWORD you configured.
Change the admin password
Go to Profile → Change Password. The initial password is a placeholder — change it immediately to a strong credential.
Connect your source code repositories
Go to Settings → Integrations and connect GitHub or GitLab. This enables repository browsing and remediation PRs/MRs.
Run your first scan
Go to Scans → New Scan. Select a repository or enter a local path (accessible from the container). Click Start Scan.
Invite team members
Go to Settings → Users → Invite User. Assign the viewer role for read-only access or admin for full access. Or configure SSO for automatic provisioning.
Configure CI/CD integration
Generate a Personal API Token and add it to your CI/CD pipeline secrets. Use the CI/CD examples to add security gates to your pipelines.
SCA Scanning
Software Composition Analysis (SCA) scans dependency manifests, resolves the full dependency tree, and checks each package against multiple vulnerability databases. Results are enriched with real-world exploitability signals to help teams prioritize what actually matters.
How it works
- dpndncY traverses the target directory for supported manifest and lock files
- Dependency trees are parsed — direct and transitive dependencies at exact resolved versions
- Package identifiers are queried against OSV, NVD, GHSA, CISA KEV, and EPSS
- Findings are enriched with CVSS scores, EPSS probability, KEV status, and ExploitDB references
- A composite risk score is computed per vulnerability combining all signals
- Results are stored and surfaced in the UI with remediation guidance and upgrade paths
Supported Ecosystems
| Ecosystem | Manifest files detected | Lock file support |
|---|---|---|
| npm / Node.js | package.json | package-lock.json, yarn.lock, pnpm-lock.yaml |
| Python | requirements.txt, Pipfile, pyproject.toml | Pipfile.lock, poetry.lock |
| Java / Maven | pom.xml | — |
| Java / Gradle | build.gradle, build.gradle.kts | gradle.lockfile |
| Go | go.mod | go.sum |
| .NET / NuGet | *.csproj, packages.config | packages.lock.json |
| Ruby | Gemfile | Gemfile.lock |
| PHP / Composer | composer.json | composer.lock |
| Rust / Cargo | Cargo.toml | Cargo.lock |
Lock files are used when present. They contain exact resolved versions for the entire dependency tree, resulting in more accurate CVE matching than manifest files alone.
Vulnerability Sources
| Source | Data provided |
|---|---|
| OSV.dev | Open source vulnerability database (Google). Primary advisory source for npm, PyPI, Maven, Go, NuGet, Cargo, RubyGems. |
| NVD | NIST National Vulnerability Database. CVSS v3.1 base scores and vector strings. |
| GHSA | GitHub Security Advisories. Earlier disclosure, ecosystem-enriched detail. |
| CISA KEV | CISA Known Exploited Vulnerabilities catalog. Any CVE here is actively exploited in the wild — highest priority. |
| EPSS | Exploit Prediction Scoring System (FIRST.org). 0–1 probability of exploitation in the next 30 days. |
| ExploitDB | Public exploit code database. Presence of working exploit code amplifies severity. |
SAST Scanning
The dpndncY SAST engine performs static analysis on your source code using three parallel analyzers. No external SAST tool installation is required — all analysis runs inside the container.
| Analyzer | Languages | Method |
|---|---|---|
| Taint Analyzer | JavaScript, TypeScript | Intra-function data flow tracking from user-controlled sources to dangerous sinks, with call graph resolution up to depth 5 |
| AST Analyzer | Python | Python stdlib AST-based taint analysis, executed in an isolated subprocess |
| Pattern Analyzer | All 9 languages + secrets | 300+ regex/AST patterns covering injection, crypto misuse, secrets, insecure APIs |
Starting a SAST scan via API
POST /api/sast/scan
Authorization: Bearer <token>
Content-Type: application/json
{
"repoPath": "/mnt/repos/myapp",
"branch": "feature/payment-refactor",
"baseBranch": "main",
"deltaOnly": true
}
Scans run asynchronously. Poll the run status:
GET /api/sast/runs/:runId
# status: "pending" | "running" | "completed" | "failed"
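The polling pattern can be sketched as a small loop; here a stub stands in for the authenticated curl call so the shape is clear:

```shell
# Poll a status source until it reports "completed" (max 30 attempts).
# In a real pipeline, replace fetch_status with the authenticated curl
# call to GET /api/sast/runs/:runId shown above.
fetch_status() { echo "completed"; }   # stub standing in for the API call

for i in $(seq 1 30); do
  STATUS=$(fetch_status)
  [ "$STATUS" = "completed" ] && break
  sleep 10
done
echo "final status: $STATUS"
```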
Supported Languages
| Language | Rules | Analysis depth |
|---|---|---|
| JavaScript / TypeScript | 80+ | Full taint tracking with call graph, source/sink/sanitizer detection |
| Python | 55+ | AST taint analysis — subprocess/exec/eval/deserialization sinks |
| Java | 35+ | SQL injection, XXE, SSRF, insecure deserialization, path traversal |
| C# | 25+ | SQL injection, LDAP injection, XSS, insecure cryptography |
| Go | 25+ | Command injection, SSRF, path traversal, weak crypto |
| PHP | 25+ | SQL injection, XSS, eval injection, file inclusion, SSRF |
| Ruby | 20+ | SQL injection, command injection, mass assignment, SSRF |
| C / C++ | 15+ | Buffer overflows, format strings, unsafe functions |
| Secrets (all files) | 20+ | API keys, tokens, private keys, connection strings |
Rule Engine
Each rule defines an id, severity (CRITICAL, HIGH, MEDIUM, or LOW), confidence (HIGH for taint-confirmed, MEDIUM for pattern-matched), associated CWEs, and remediation guidance.
CWE identifiers in SAST findings are correlated with CVEs in SCA results to compute Attack Path boosts — a SAST finding in the same package as a CVE with a matching CWE scores 1.3× higher.
Suppressing findings
POST /api/sast/runs/:runId/suppress
Authorization: Bearer <token>
{
"findingId": "uuid",
"reason": "False positive — input validated by middleware"
}
Suppressed findings remain in the audit log but are excluded from policy evaluation and dashboard counts.
Attack Path Graph
The Attack Path Graph maps how an attacker could move from a vulnerable dependency through your codebase to a reachable entry point. It combines SCA vulnerability data with SAST code findings and import resolution to produce scored, prioritized attack chains.
Path score formula
score = depRiskScore × reachabilityWeight × sinkWeight × aiAmplification × cweBoost
depRiskScore → CVSS + EPSS composite (0–10)
reachabilityWeight → 1.0 imported / 1.3 called directly
sinkWeight → 1.5 SQL/exec, 1.3 path/SSRF, 1.0 log
aiAmplification → 1.0–1.2 based on AI risk context
cweBoost → 1.3× when SAST CWE matches CVE CWE
Range: [0, 2.0]
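A worked example with hypothetical inputs — a directly-called dependency reaching a SQL sink, no AI context, and a matching CWE. This computes the raw product of the terms above, before whatever scaling the engine applies to map scores into the documented final range:

```shell
awk 'BEGIN {
  depRiskScore       = 8.8   # hypothetical CVSS/EPSS composite
  reachabilityWeight = 1.3   # dependency is called directly
  sinkWeight         = 1.5   # SQL/exec sink
  aiAmplification    = 1.0   # no AI risk context
  cweBoost           = 1.3   # SAST CWE matches the CVE CWE
  score = depRiskScore * reachabilityWeight * sinkWeight * aiAmplification * cweBoost
  printf "raw score: %.2f\n", score   # prints: raw score: 22.31
}'
```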
API
GET /api/scans/:id/attack-graph # Full graph (nodes + edges)
GET /api/scans/:id/attack-path/:pathId # Path detail + narrative explanation
Policy Engine
Define security policies to gate CI/CD pipelines. Policies evaluate findings against thresholds, blocked rules, and EPSS minimums. A failed policy returns a non-zero exit code that fails the build.
Policy configuration
{
"thresholds": {
"critical": 0, // fail if any CRITICAL findings
"high": 3,
"medium": null, // null = no limit
"low": null
},
"blockedRules": [
"JS-TAINT-SQL-001",
"PY-EXEC-001"
],
"deltaOnly": true, // only evaluate findings in changed lines (PR gate)
"minEpss": 0.4 // only count vulns with EPSS ≥ 0.4
}
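The threshold semantics can be mirrored in a local pre-check; a sketch using the same grep/cut extraction style as the GitLab CI example below (the summary JSON and its field names here are illustrative, not an actual API response):

```shell
# Illustrative findings summary; a real gate would take these counts
# from the policy evaluation endpoint's response.
SUMMARY='{"critical":2,"high":1}'
LIMIT=0   # mirrors thresholds.critical from the policy above

CRIT=$(printf '%s' "$SUMMARY" | grep -o '"critical":[0-9]*' | cut -d: -f2)
if [ "$CRIT" -gt "$LIMIT" ]; then
  echo "policy failed: $CRIT critical findings (limit $LIMIT)"
else
  echo "policy passed"
fi
```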
Policy evaluation
POST /api/sast/policy/evaluate
{
"runId": "uuid",
"policy": { ... }
}
# Response:
{
"passed": false,
"violations": [
{ "rule": "critical threshold", "found": 2, "limit": 0 }
]
}
SBOM & Export
| Format | Endpoint | Use case |
|---|---|---|
| CycloneDX 1.4 JSON | GET /api/scans/:id/sbom | SBOM for compliance, procurement, auditors |
| SARIF 2.1.0 | GET /api/sast/runs/:id/sarif | SAST findings for GitHub Code Scanning, Azure DevOps, IDE plugins |
| CSV | GET /api/scans/:id/export/csv | SCA findings for reporting, spreadsheet analysis |
Scan History & Trends
Each completed scan saves a snapshot. The trend engine compares consecutive snapshots to compute a risk delta: new findings, resolved findings, and change in composite risk score. Trend data powers the dashboard timeline chart.
GET /api/scans/:id/history # List historical snapshots for a project
GET /api/scans/:id/trend # Risk delta between last 2 snapshots
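As an illustration of what the delta captures (counts and field names here are hypothetical, not the trend endpoint's actual schema):

```shell
# Counts from two consecutive snapshots (hypothetical values)
PREV_TOTAL=14
CURR_TOTAL=17
NEW=5        # findings present now but not in the previous snapshot
RESOLVED=2   # findings present before but resolved since

# Net change in finding count: new minus resolved
DELTA=$((CURR_TOTAL - PREV_TOTAL))
echo "new=$NEW resolved=$RESOLVED net_delta=$DELTA"
```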
AI Risk Detection
dpndncY flags AI/ML-specific security risks in addition to standard vulnerabilities:
- Model loading via insecure deserialization (pickle, unsafe torch.load)
- LLM prompt injection surface in AI framework integrations
- Model supply chain risks: packages that download models from unverified registries at runtime
- Training data exposure via logging, serialization, or external API calls
Authentication
| Type | Lifetime | Use case |
|---|---|---|
| Session token | 8h (configurable) | Browser UI. Issued on login, stored as HTTP-only cookie. |
| Personal API Token (PAT) | 1 year (configurable) | CI/CD pipelines, VS Code extension, API scripts. Passed as Authorization: Bearer <token>. |
Creating a PAT
Via UI: Profile → API Tokens → Create Token
POST /api/tokens
Authorization: Bearer <session-token>
{ "name": "GitHub Actions", "expiresIn": "365d" }
# Save the returned token value — shown only once
API Reference
Base URL: https://sca.yourcompany.com. All endpoints require Authorization: Bearer <token> unless noted.
Scans (SCA)
POST /api/scans
{ "repoPath": "/path/or/git-url", "branch": "main", "label": "optional" }
SAST
POST /api/sast/scan
{ "repoPath": "/path", "branch": "feat/x", "baseBranch": "main", "deltaOnly": true }
Packages (VS Code / quick check)
POST /api/packages/check
{ "packages": [{ "name": "lodash", "version": "4.17.15", "ecosystem": "npm" }] }
Tokens
POST /api/tokens (create), DELETE /api/tokens/:id (revoke)
CI/CD Integration
Use a Personal API Token to add dpndncY security gates to your pipeline. The typical pattern: scan → poll until complete → evaluate policy → fail build on violation.
GitHub Actions
# .github/workflows/security.yml
name: Security Gate
on: [push, pull_request]
jobs:
dpndncy-scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: SCA Scan
id: sca
run: |
RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/scans \
-H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"repoPath":"${{ github.workspace }}","branch":"${{ github.ref_name }}"}')
echo "scan_id=$(echo $RESULT | jq -r .id)" >> $GITHUB_OUTPUT
env:
DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}
- name: SAST Scan
id: sast
run: |
RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/scan \
-H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"repoPath":"${{ github.workspace }}","branch":"${{ github.ref_name }}","baseBranch":"main","deltaOnly":true}')
RUN_ID=$(echo $RESULT | jq -r .runId)
# Poll until complete
for i in $(seq 1 30); do
STATUS=$(curl -sf $DPNDNCY_URL/api/sast/runs/$RUN_ID \
-H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" | jq -r .status)
[ "$STATUS" = "completed" ] && break
sleep 10
done
echo "run_id=$RUN_ID" >> $GITHUB_OUTPUT
env:
DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}
- name: Policy Gate
run: |
POLICY='{"thresholds":{"critical":0,"high":5},"deltaOnly":true}'
RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/policy/evaluate \
-H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
-H "Content-Type: application/json" \
-d "{\"runId\":\"${{ steps.sast.outputs.run_id }}\",\"policy\":$POLICY}")
echo $RESULT | jq .
echo $RESULT | jq -e '.passed == true'
env:
DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}
GitLab CI
# .gitlab-ci.yml
security-scan:
stage: test
image: curlimages/curl:latest
script:
- |
SCAN=$(curl -sf -X POST $DPNDNCY_URL/api/scans \
-H "Authorization: Bearer $DPNDNCY_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"repoPath\":\"$CI_PROJECT_DIR\",\"branch\":\"$CI_COMMIT_REF_NAME\"}")
SCAN_ID=$(echo $SCAN | grep -o '"id":"[^"]*"' | cut -d'"' -f4)
SAST=$(curl -sf -X POST $DPNDNCY_URL/api/sast/scan \
-H "Authorization: Bearer $DPNDNCY_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"repoPath\":\"$CI_PROJECT_DIR\",\"branch\":\"$CI_COMMIT_REF_NAME\",\"deltaOnly\":true}")
RUN_ID=$(echo $SAST | grep -o '"runId":"[^"]*"' | cut -d'"' -f4)
for i in $(seq 1 30); do
STATUS=$(curl -sf $DPNDNCY_URL/api/sast/runs/$RUN_ID \
-H "Authorization: Bearer $DPNDNCY_TOKEN" | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
[ "$STATUS" = "completed" ] && break
sleep 10
done
POLICY_RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/policy/evaluate \
-H "Authorization: Bearer $DPNDNCY_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"runId\":\"$RUN_ID\",\"policy\":{\"thresholds\":{\"critical\":0},\"deltaOnly\":true}}")
echo $POLICY_RESULT | grep -q '"passed":true' || (echo "Security policy failed" && exit 1)
variables:
DPNDNCY_URL: https://sca.yourcompany.com
In GitHub: Repository → Settings → Secrets → Actions. In GitLab: Settings → CI/CD → Variables. Mark them as protected and masked.
GitHub Integration
Connect dpndncY to GitHub to browse repositories and open automated remediation pull requests for vulnerable dependencies.
Setup
- Create a GitHub Personal Access Token with repo scope (or a fine-grained token with read/write on contents and pull requests)
- Set GITHUB_TOKEN in your configuration and restart the service
- Verify the connection: Settings → Integrations → GitHub
Remediation PRs
From any scan result, select affected packages and click Open Remediation PR. dpndncY creates a branch, bumps the vulnerable dependency to the patched version in the manifest and lock file, and opens a PR with full CVE context in the description.
GitLab Integration
Same capabilities as GitHub: repository browsing and automated Merge Requests for vulnerability remediation.
Setup
- Create a GitLab Personal Access Token with api scope
- Set GITLAB_TOKEN (and GITLAB_URL for self-hosted instances) in configuration
- Restart the service
VS Code Extension
The dpndncY VS Code extension shows vulnerability data inline in your manifest files. Vulnerable packages are underlined with severity indicators — hover for CVE detail, CVSS score, and recommended fix version.
Installation
- Download dpndncy-security-*.vsix from your dpndncY instance: Settings → VS Code Extension
- In VS Code: Extensions → ⋯ → Install from VSIX…
Settings
| Setting | Description |
|---|---|
| dpndncy.serverUrl | URL of your dpndncY instance, e.g. https://sca.yourcompany.com |
| dpndncy.apiToken | Personal API Token (generate from Profile → API Tokens) |
| dpndncy.minSeverity | Minimum severity to show: LOW / MEDIUM / HIGH / CRITICAL |
| dpndncy.autoScan | Scan on file save (default: false) |
Notifications
| Channel | Configuration |
|---|---|
| Slack | Set SLACK_WEBHOOK_URL to a Slack Incoming Webhook URL. Notifications sent on scan completion and policy failure. |
| Discord | Set DISCORD_WEBHOOK_URL to a Discord webhook URL. |
| Configure SMTP settings. Emails sent for scan completion, policy failures, and new CRITICAL vulnerabilities. | |
| Custom webhook | POST /api/webhooks — register any HTTP endpoint to receive JSON payloads for scan events. Supports HMAC request signing. |
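For HMAC-signed custom webhooks, the receiver recomputes the signature over the raw body and compares it to the signature header. A sketch with OpenSSL (the secret and payload here are illustrative; check your webhook registration for the actual header name and secret):

```shell
# Recompute the HMAC-SHA256 signature over the raw request body; the
# result should match the signature header sent with the webhook.
SECRET='whsec_example'                   # hypothetical shared secret
PAYLOAD='{"event":"scan.completed"}'     # raw request body as received
EXPECTED=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "computed signature: $EXPECTED"
```

Always sign the byte-exact body as received; re-serializing the JSON first will change the digest.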
SSO / OIDC
dpndncY supports OIDC-based SSO with Okta, Azure AD, Auth0, and any OIDC-compliant identity provider. Users are provisioned automatically on first login. Role assignment is controlled via OIDC group claims.
OIDC_ISSUER=https://yourorg.okta.com/oauth2/default
OIDC_CLIENT_ID=0oa1b2c3d4e5
OIDC_CLIENT_SECRET=your-client-secret
OIDC_CALLBACK_URL=https://sca.yourcompany.com/auth/oidc/callback
When configured, a Sign in with SSO button appears on the login page. Password-based login for local accounts can be disabled from Settings → Authentication.
User Management
| Role | Capabilities |
|---|---|
| Admin | Full access: manage users, integrations, settings, all scans, tokens, audit log |
| Viewer | Read-only: view scan results, findings, SBOM exports. Can call /api/packages/check. Cannot start scans or change settings. |
Manage users at Settings → Users or via API:
POST /api/admin/users
Authorization: Bearer <admin-token>
{ "email": "engineer@yourcompany.com", "role": "viewer" }
API Tokens (PAT)
- Tokens are scoped to the permissions of the creating user
- The token value is shown once only — store it immediately in your secrets manager
- Create separate tokens per integration (CI, VS Code, monitoring) for independent revocation
- Revocation is instant — use Profile → API Tokens or DELETE /api/tokens/:id
- Audit token usage from Settings → Audit Log
Backup & Restore
All persistent state is in the SQLite database and the scan snapshot directory. Both live on the data volume.
Docker backup
# Backup the data volume to a tar archive
docker run --rm \
-v dpndncy_data:/data \
-v $(pwd)/backups:/backups \
alpine tar czf /backups/dpndncy-$(date +%Y%m%d).tar.gz /data
# Restore
docker run --rm \
-v dpndncy_data:/data \
-v $(pwd)/backups:/backups \
alpine tar xzf /backups/dpndncy-20260309.tar.gz -C /
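Before rotating old archives out, it's worth confirming a backup is readable and contains the database file. A minimal sketch using a throwaway directory:

```shell
# Create a throwaway archive mimicking the backup layout, then verify
# that the database file is listed inside it before trusting it.
mkdir -p /tmp/dpndncy-demo/data
echo "placeholder" > /tmp/dpndncy-demo/data/dpndncy.db
tar czf /tmp/dpndncy-demo/backup.tar.gz -C /tmp/dpndncy-demo data
tar tzf /tmp/dpndncy-demo/backup.tar.gz | grep -q 'data/dpndncy.db' && echo "archive OK"
```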
Kubernetes backup
Backup the PVC using your cluster's volume snapshot mechanism (e.g., Velero, CSI snapshots):
velero backup create dpndncy-backup \
--include-namespaces security \
--wait
Windows backup
Stop the service, copy C:\ProgramData\dpndncY\ to a backup location, then restart:
net stop dpndncy
robocopy "C:\ProgramData\dpndncY" "D:\Backups\dpndncy-%date%" /E /COPYALL
net start dpndncy
Upgrades
dpndncY applies database migrations automatically on startup. Always back up your data before upgrading.
Docker
# Pull the new image and recreate the container (data volume is preserved)
docker compose pull
docker compose up -d
Kubernetes
helm repo update
helm upgrade dpndncy dpndncy/dpndncy-platform \
--namespace security \
-f values-prod.yaml
Windows
Run the new version's .exe installer over the existing installation. The installer handles the service stop/start and data migration automatically.
Read the release notes for the new version. Major versions may include breaking API or configuration changes. Back up your data volume before running any upgrade.
Troubleshooting
Container won't start
- Check that JWT_SECRET is set and non-empty
- Verify the data volume is mounted and writable by the container process
- Check logs: docker compose logs dpndncy
Scans return no findings
- Verify the target path is mounted into the container and accessible
- Check that a supported manifest file exists in the target directory
- Ensure outbound HTTPS to api.osv.dev and services.nvd.nist.gov is allowed by your firewall/proxy
SAST scan times out
- Increase SAST_MAX_RUNTIME_SEC (e.g. 600 for large monorepos)
- Use deltaOnly: true to limit analysis to changed files
- Ensure the container has sufficient CPU — SAST is CPU-bound
SSO / OIDC login fails
- Verify OIDC_CALLBACK_URL matches exactly what's registered in your IdP (including trailing slash if any)
- Check that the dpndncY instance is reachable at the callback URL from the browser, not just from the server
- Review the server log for the OIDC error response detail
Windows Service won't start
- Check the Windows Event Viewer: Application → dpndncY
- Verify the service account has read/write access to C:\ProgramData\dpndncY
- Check that the configured port is not in use by another process: netstat -ano | findstr :3000
Viewing logs
# Docker
docker compose logs -f dpndncy
# Kubernetes
kubectl logs -n security -l app=dpndncy -f
# Windows
Get-EventLog -LogName Application -Source dpndncY -Newest 100
Licensed customers have access to priority support. Contact support@dpndncy.dev with your instance ID (visible in Settings → About) and log output.