dpndncY Enterprise Docs

Software supply chain security,
built for enterprise teams

dpndncY is an enterprise-grade, self-hosted software composition analysis (SCA) and static application security testing (SAST) platform. It scans dependencies and source code for vulnerabilities, computes real-world exploitability signals, maps attack paths, and enforces security policy — all within your own infrastructure perimeter.

Deployable as a Docker container, a Kubernetes workload via Helm, or a Windows Server installer — no developer toolchain required on the host.

🐳
Docker Compose
One YAML file, one command. Recommended for teams without Kubernetes.
☸️
Kubernetes + Helm
Production-grade deployment with auto-scaling, persistent volumes, and ingress.
🪟
Windows Installer
MSI/EXE setup wizard for Windows Server 2019+. Installs as a Windows Service.
🔍
SCA + SAST
9 package ecosystems, 9 languages, 300+ SAST rules, AI risk context.
🕸️
Attack Paths
Graph-based exploit path analysis from vulnerable dependency to entry point.
⚙️
REST API
Full API for CI/CD pipelines, IDE plugins, and custom integrations.

Architecture

dpndncY runs as a single containerized process backed by an embedded SQLite database. There are no external service dependencies — no Redis, no Postgres, no message queue — making deployment simple and operations lightweight.

| Component | Description |
|---|---|
| HTTP API Server | Express-based REST API. All scan orchestration, authentication, and data access. |
| Embedded Database | SQLite via better-sqlite3. Schema auto-migrates on startup. Persisted via volume mount. |
| SCA Engine | Dependency manifest parser + multi-source vulnerability enrichment (OSV, NVD, GHSA, CISA KEV, EPSS). |
| SAST Engine | Proprietary 300+ rule engine: JS/TS taint analysis, Python AST analysis, multi-language pattern scanner. All run in-process. |
| Attack Path Engine | Graph builder, path finder, CWE-to-CVE correlation, exploitability scorer. |
| Web Frontend | Single-page application served as static files from the container. |

Data sovereignty

During scans, dpndncY queries external vulnerability databases (OSV.dev, NVD, GHSA) over HTTPS. Only package names, versions, and hashes are transmitted — never source code. All scan results, findings, and metadata remain exclusively inside your environment.

Quick Start

The fastest path to a running instance is Docker Compose. Copy the snippet below, fill in three environment values, and you're up.

docker-compose.yml
version: "3.8"
services:
  dpndncy:
    image: dpndncy/platform:2.7.0
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      JWT_SECRET: "change-to-a-long-random-string"
      ADMIN_EMAIL: "admin@yourcompany.com"
      ADMIN_PASSWORD: "change-me-on-first-login"
    volumes:
      - dpndncy_data:/app/data

volumes:
  dpndncy_data:

docker compose up -d
# Open http://localhost:3000 — log in with ADMIN_EMAIL / ADMIN_PASSWORD

See Docker deployment for production-hardened configuration, or Kubernetes + Helm for enterprise-scale deployments.


Docker Deployment

Docker Compose is the recommended deployment method for teams that want a production-ready instance without Kubernetes overhead. Requires Docker Engine 20.10+ and Docker Compose v2.

Production docker-compose.yml

docker-compose.yml
version: "3.8"

services:
  dpndncy:
    image: dpndncy/platform:2.7.0
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"    # Bind to localhost; expose via reverse proxy
    environment:
      NODE_ENV: production
      JWT_SECRET: "${JWT_SECRET}"
      ADMIN_EMAIL: "${ADMIN_EMAIL}"
      ADMIN_PASSWORD: "${ADMIN_PASSWORD}"
      SESSION_DURATION: "8h"
      # GitHub integration (optional)
      GITHUB_TOKEN: "${GITHUB_TOKEN}"
      # Email notifications (optional)
      SMTP_HOST: "${SMTP_HOST}"
      SMTP_PORT: "587"
      SMTP_USER: "${SMTP_USER}"
      SMTP_PASS: "${SMTP_PASS}"
    volumes:
      - dpndncy_data:/app/data
      - dpndncy_scans:/app/data/scans    # Scan history & snapshots
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"

volumes:
  dpndncy_data:
  dpndncy_scans:

Store secrets in a .env file alongside the compose file (never commit it to source control):

.env (not committed to git)
JWT_SECRET=your-very-long-random-secret-64-chars-minimum
ADMIN_EMAIL=admin@yourcompany.com
ADMIN_PASSWORD=initial-password-change-on-first-login
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
SMTP_HOST=smtp.yourcompany.com
SMTP_USER=dpndncy-alerts@yourcompany.com
SMTP_PASS=your-smtp-password

Reverse proxy with nginx

In production, run dpndncY behind nginx or another reverse proxy for TLS termination:

/etc/nginx/sites-available/dpndncy
server {
    listen 443 ssl http2;
    server_name sca.yourcompany.com;

    ssl_certificate     /etc/ssl/certs/sca.yourcompany.com.crt;
    ssl_certificate_key /etc/ssl/private/sca.yourcompany.com.key;

    location / {
        proxy_pass         http://127.0.0.1:3000;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_read_timeout 120s;
        client_max_body_size 50m;
    }
}

server {
    listen 80;
    server_name sca.yourcompany.com;
    return 301 https://$host$request_uri;
}

Starting and managing the service

# Start in background
docker compose up -d

# View logs
docker compose logs -f dpndncy

# Restart after config change
docker compose restart dpndncy

# Stop
docker compose down

# Update to a new version (data persists in named volumes)
docker compose pull
docker compose up -d

Kubernetes + Helm

The dpndncY Helm chart deploys the platform as a Kubernetes Deployment with a PersistentVolumeClaim for data, a Service, and an optional Ingress resource. Requires Kubernetes 1.24+ and Helm 3.x.

Add the Helm repository

helm repo add dpndncy https://charts.dpndncy.dev
helm repo update

Install with minimal configuration

helm install dpndncy dpndncy/dpndncy-platform \
  --namespace security \
  --create-namespace \
  --set auth.jwtSecret="your-secret-here" \
  --set admin.email="admin@yourcompany.com" \
  --set admin.password="initial-password"

Production values file

values-prod.yaml
replicaCount: 2

image:
  repository: dpndncy/platform
  tag: "2.7.0"
  pullPolicy: IfNotPresent

auth:
  jwtSecret: ""           # Set via --set or secretRef
  sessionDuration: "8h"

admin:
  email: "admin@yourcompany.com"
  password: ""           # Set via --set or secretRef

persistence:
  enabled: true
  storageClass: "standard"
  size: 20Gi

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: sca.yourcompany.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: dpndncy-tls
      hosts:
        - sca.yourcompany.com

resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
    cpu: "1000m"

github:
  token: ""            # GITHUB_TOKEN — set via secretRef

smtp:
  host: "smtp.yourcompany.com"
  port: 587
  user: ""
  pass: ""

sso:
  oidcIssuer: ""
  clientId: ""
  clientSecret: ""

helm install dpndncy dpndncy/dpndncy-platform \
  --namespace security \
  --create-namespace \
  -f values-prod.yaml \
  --set auth.jwtSecret="$(openssl rand -hex 32)" \
  --set admin.password="your-initial-password"

Secrets management

For production, store sensitive values as Kubernetes Secrets and reference them via the chart's existingSecret option rather than passing them as Helm values:

kubectl create secret generic dpndncy-secrets \
  --namespace security \
  --from-literal=jwtSecret="$(openssl rand -hex 32)" \
  --from-literal=adminPassword="your-password" \
  --from-literal=githubToken="ghp_xxxx"
# In values-prod.yaml:
existingSecret: dpndncy-secrets

Upgrade

helm repo update
helm upgrade dpndncy dpndncy/dpndncy-platform \
  --namespace security \
  -f values-prod.yaml \
  --set auth.jwtSecret="existing-secret"

Persistent data on upgrade

The PVC is not deleted on helm upgrade or helm uninstall by default. Your scan history and database are retained across version upgrades.

Windows Installer

dpndncY is available as a signed Windows installer (.exe) for deployment on Windows Server 2019 or later. The installer handles all dependencies and installs dpndncY as a Windows Service that starts automatically with the OS.

Prerequisites

  • Windows Server 2019, 2022, or Windows 10/11 (64-bit)
  • 4 GB RAM minimum; 8 GB recommended
  • Administrator privileges for installation
  • Outbound HTTPS (port 443) to OSV.dev, NVD, and GHSA for vulnerability data
  • No Node.js, Python, or other runtime required — all bundled in the installer

Installation steps

1. Download the installer

   Download dpndncY-Setup-2.7.0-x64.exe from your licensed download portal or request it via License Request.

2. Run as Administrator

   Right-click the installer → Run as administrator. The setup wizard will launch.

3. Configure the installation

   The wizard will ask for:

     • Install directory (default: C:\Program Files\dpndncY)
     • Data directory (default: C:\ProgramData\dpndncY) — keep on a drive with sufficient space
     • HTTP port (default: 3000)
     • Admin email and initial password
     • JWT secret — auto-generated if left blank
     • Service account — by default runs as NT AUTHORITY\NetworkService

4. Complete installation

   Click Install. The wizard installs the bundled runtimes, configures the Windows Service, and adds a firewall rule for the selected port.

5. Access the platform

   Open http://localhost:3000 (or the port you configured) in a browser. Log in with the admin credentials you set during installation.

Managing the Windows Service

# View service status
sc query dpndncy

# Stop / Start / Restart
net stop dpndncy
net start dpndncy

# Or via Services MMC (services.msc) — look for "dpndncY Platform"

Post-install configuration

After installation, edit the configuration file at C:\ProgramData\dpndncY\config.env to add integration credentials (GitHub token, SMTP settings, OIDC, etc.), then restart the service.

# C:\ProgramData\dpndncY\config.env
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
SMTP_HOST=smtp.yourcompany.com
SMTP_PORT=587
SMTP_USER=dpndncy@yourcompany.com
SMTP_PASS=your-smtp-password
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/xxx

Uninstallation

Use Control Panel → Programs → Uninstall a program → dpndncY. The uninstaller removes the service and binaries. The data directory (C:\ProgramData\dpndncY) is preserved — delete it manually if you want to remove all data.

Upgrade

Run the new version's installer over the existing installation. The installer detects the previous version, stops the service, updates the binaries, and restarts — your data and configuration are preserved.

System Requirements

| Deployment | Minimum | Recommended (production) |
|---|---|---|
| Docker | Docker Engine 20.10, 2 vCPU, 4 GB RAM, 20 GB disk | 4 vCPU, 8 GB RAM, 100 GB disk (for large scan histories) |
| Kubernetes | K8s 1.24+, 2 vCPU, 4 GB RAM per pod, 20 GB PVC | 4 vCPU, 8 GB RAM, 100 GB PVC; 2 replicas |
| Windows Installer | Windows Server 2019, 4 vCPU, 4 GB RAM, 20 GB | 8 vCPU, 8 GB RAM, 100 GB on a dedicated drive |

| Requirement | Detail |
|---|---|
| Outbound HTTPS | Port 443 to api.osv.dev, services.nvd.nist.gov, api.github.com, api.first.org (EPSS) |
| Inbound HTTP | Port 3000 (configurable). Expose via reverse proxy with TLS for production. |
| Storage I/O | SSD or NVMe recommended for the data volume. SQLite is write-heavy during large scans. |
| No runtime dependencies | All runtimes (Node.js, Python) are bundled inside the container / Windows installer. The host only needs Docker or Windows Server. |

Configuration Reference

Configuration is provided via environment variables. In Docker deployments, set them in docker-compose.yml or a .env file. In Kubernetes, use Helm values or a Secret. In the Windows installer, edit C:\ProgramData\dpndncY\config.env.

Core

| Variable | Default | Description |
|---|---|---|
| JWT_SECRET | (required) | Secret for signing session tokens. Minimum 32 random characters. Rotate this to invalidate all active sessions. |
| PORT | 3000 | HTTP port the server binds to |
| NODE_ENV | production | Set to production. Enables secure cookie flags and disables debug output. |
| SESSION_DURATION | 8h | Validity period for browser session tokens (e.g. 4h, 1d) |
| DB_PATH | /app/data/dpndncy.db | Path to the SQLite database file. Must be on a persistent volume. |

Admin account

| Variable | Description |
|---|---|
| ADMIN_EMAIL | Email for the default admin account, created on first startup only |
| ADMIN_PASSWORD | Initial password. Change immediately after first login via Profile → Change Password |

SAST engine tuning

| Variable | Default | Description |
|---|---|---|
| SAST_MAX_RUNTIME_SEC | 300 | Max wall-clock seconds per SAST scan before forced timeout. Increase for large monorepos. |
| SAST_STORAGE_PATH | /app/data/sast | Directory for SARIF output files |

Email (SMTP)

| Variable | Description |
|---|---|
| SMTP_HOST | SMTP relay hostname (e.g. smtp.office365.com, smtp.gmail.com) |
| SMTP_PORT | 587 for STARTTLS (recommended), 465 for implicit TLS, 25 for unauthenticated relay |
| SMTP_USER | SMTP authentication username |
| SMTP_PASS | SMTP password or app password |
| SMTP_FROM | From address for notifications (e.g. dpndncy@yourcompany.com) |

Integrations

| Variable | Description |
|---|---|
| GITHUB_TOKEN | GitHub PAT with repo scope — enables repository listing and remediation PRs |
| GITLAB_TOKEN | GitLab PAT with api scope |
| GITLAB_URL | GitLab instance URL (default: https://gitlab.com). Set for self-hosted GitLab. |
| SLACK_WEBHOOK_URL | Slack incoming webhook URL for scan notifications |
| DISCORD_WEBHOOK_URL | Discord webhook URL |
| OIDC_ISSUER | OIDC issuer URL (Okta, Azure AD, Auth0) |
| OIDC_CLIENT_ID | OIDC client ID |
| OIDC_CLIENT_SECRET | OIDC client secret |
| OIDC_CALLBACK_URL | Full callback URL, e.g. https://sca.yourcompany.com/auth/oidc/callback |

First-Time Setup

1. Log in with the admin account

   Open the platform URL in a browser. Log in with the ADMIN_EMAIL and ADMIN_PASSWORD you configured.

2. Change the admin password

   Go to Profile → Change Password. The initial password is a placeholder — change it immediately to a strong credential.

3. Connect your source code repositories

   Go to Settings → Integrations and connect GitHub or GitLab. This enables repository browsing and remediation PRs/MRs.

4. Run your first scan

   Go to Scans → New Scan. Select a repository or enter a local path (accessible from the container). Click Start Scan.

5. Invite team members

   Go to Settings → Users → Invite User. Assign the viewer role for read-only access or admin for full access. Or configure SSO for automatic provisioning.

6. Configure CI/CD integration

   Generate a Personal API Token and add it to your CI/CD pipeline secrets. Use the CI/CD examples to add security gates to your pipelines.


SCA Scanning

Software Composition Analysis (SCA) scans dependency manifests, resolves the full dependency tree, and checks each package against multiple vulnerability databases. Results are enriched with real-world exploitability signals to help teams prioritize what actually matters.

How it works

  1. dpndncY traverses the target directory for supported manifest and lock files
  2. Dependency trees are parsed — direct and transitive dependencies at exact resolved versions
  3. Package identifiers are queried against OSV, NVD, GHSA, CISA KEV, and EPSS
  4. Findings are enriched with CVSS scores, EPSS probability, KEV status, and ExploitDB references
  5. A composite risk score is computed per vulnerability combining all signals
  6. Results are stored and surfaced in the UI with remediation guidance and upgrade paths
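The first steps of this pipeline amount to a directory walk over a table of known manifest filenames. The sketch below is an illustrative approximation (a subset of the ecosystems listed later, not dpndncY's actual detection logic):

```python
import os

# Illustrative subset of the manifest files listed under "Supported Ecosystems".
MANIFEST_FILES = {
    "package.json": "npm",
    "requirements.txt": "pypi",
    "pyproject.toml": "pypi",
    "pom.xml": "maven",
    "go.mod": "go",
    "Gemfile": "rubygems",
    "composer.json": "composer",
    "Cargo.toml": "cargo",
}

def find_manifests(root: str) -> list[tuple[str, str]]:
    """Walk `root` and return (path, ecosystem) pairs for known manifest files."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name in MANIFEST_FILES:
                found.append((os.path.join(dirpath, name), MANIFEST_FILES[name]))
    return sorted(found)
```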

Supported Ecosystems

| Ecosystem | Manifest files detected | Lock file support |
|---|---|---|
| npm / Node.js | package.json | package-lock.json, yarn.lock, pnpm-lock.yaml |
| Python | requirements.txt, Pipfile, pyproject.toml | Pipfile.lock, poetry.lock |
| Java / Maven | pom.xml | (none) |
| Java / Gradle | build.gradle, build.gradle.kts | gradle.lockfile |
| Go | go.mod | go.sum |
| .NET / NuGet | *.csproj, packages.config | packages.lock.json |
| Ruby | Gemfile | Gemfile.lock |
| PHP / Composer | composer.json | composer.lock |
| Rust / Cargo | Cargo.toml | Cargo.lock |

Lock files preferred

Lock files are used when present. They contain exact resolved versions for the entire dependency tree, resulting in more accurate CVE matching than manifest files alone.
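As an illustration of why lock files matter, extracting exact resolved versions from an npm `package-lock.json` (lockfile version 2/3 format) is straightforward. This is a sketch, not dpndncY's actual parser:

```python
import json

def npm_lock_packages(lock_json: str) -> dict[str, str]:
    """Map package name -> exact resolved version from a package-lock.json (v2/v3).

    Keys in "packages" look like "node_modules/lodash" or, for nested
    dependencies, "node_modules/a/node_modules/b"; the root entry has key "".
    """
    lock = json.loads(lock_json)
    result = {}
    for path, meta in lock.get("packages", {}).items():
        if not path:                      # "" is the root project itself
            continue
        name = path.split("node_modules/")[-1]
        version = meta.get("version")
        if version:
            result[name] = version
    return result
```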

Vulnerability Sources

| Source | Data provided |
|---|---|
| OSV.dev | Open source vulnerability database (Google). Primary advisory source for npm, PyPI, Maven, Go, NuGet, Cargo, RubyGems. |
| NVD | NIST National Vulnerability Database. CVSS v3.1 base scores and vector strings. |
| GHSA | GitHub Security Advisories. Earlier disclosure, ecosystem-enriched detail. |
| CISA KEV | CISA Known Exploited Vulnerabilities catalog. Any CVE here is actively exploited in the wild — highest priority. |
| EPSS | Exploit Prediction Scoring System (FIRST.org). 0–1 probability of exploitation in the next 30 days. |
| ExploitDB | Public exploit code database. Presence of working exploit code amplifies severity. |

SAST Scanning

The dpndncY SAST engine performs static analysis on your source code using three parallel analyzers. No external SAST tool installation is required — all analysis runs inside the container.

| Analyzer | Languages | Method |
|---|---|---|
| Taint Analyzer | JavaScript, TypeScript | Intra-function data flow tracking from user-controlled sources to dangerous sinks, with call graph resolution up to depth 5 |
| AST Analyzer | Python | Python stdlib AST-based taint analysis, executed in an isolated subprocess |
| Pattern Analyzer | All 9 languages + secrets | 300+ regex/AST patterns covering injection, crypto misuse, secrets, insecure APIs |

Starting a SAST scan via API

POST /api/sast/scan
Authorization: Bearer <token>
Content-Type: application/json

{
  "repoPath": "/mnt/repos/myapp",
  "branch": "feature/payment-refactor",
  "baseBranch": "main",
  "deltaOnly": true
}

Scans run asynchronously. Poll the run status:

GET /api/sast/runs/:runId
# status: "pending" | "running" | "completed" | "failed"
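In a client script, the poll loop can be wrapped in a small helper. This sketch takes the status-fetching function as a parameter so it works with any HTTP client; `fetch_status` is a stand-in for a GET on /api/sast/runs/:runId that returns the status string:

```python
import time
from typing import Callable

def wait_for_run(fetch_status: Callable[[], str],
                 timeout_sec: float = 300.0,
                 poll_interval: float = 10.0) -> str:
    """Poll until the run reaches a terminal state or the timeout elapses.

    fetch_status returns one of: "pending" | "running" | "completed" | "failed".
    """
    deadline = time.monotonic() + timeout_sec
    while True:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"SAST run still {status!r} after {timeout_sec}s")
        time.sleep(poll_interval)
```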

Supported Languages

| Language | Rules | Analysis depth |
|---|---|---|
| JavaScript / TypeScript | 80+ | Full taint tracking with call graph, source/sink/sanitizer detection |
| Python | 55+ | AST taint analysis — subprocess/exec/eval/deserialization sinks |
| Java | 35+ | SQL injection, XXE, SSRF, insecure deserialization, path traversal |
| C# | 25+ | SQL injection, LDAP injection, XSS, insecure cryptography |
| Go | 25+ | Command injection, SSRF, path traversal, weak crypto |
| PHP | 25+ | SQL injection, XSS, eval injection, file inclusion, SSRF |
| Ruby | 20+ | SQL injection, command injection, mass assignment, SSRF |
| C / C++ | 15+ | Buffer overflows, format strings, unsafe functions |
| Secrets (all files) | 20+ | API keys, tokens, private keys, connection strings |

Rule Engine

Each rule defines an id, severity (CRITICAL, HIGH, MEDIUM, or LOW), confidence (HIGH for taint-confirmed, MEDIUM for pattern-matched), associated CWEs, and remediation guidance.

CWE identifiers in SAST findings are correlated with CVEs in SCA results to compute Attack Path boosts — a SAST finding in the same package as a CVE with a matching CWE scores 1.3× higher.
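The CWE correlation reduces to a set intersection. The 1.3 multiplier comes from the text; the function shape is illustrative:

```python
CWE_BOOST = 1.3  # multiplier stated for a SAST/CVE CWE match

def cwe_boost(sast_cwes: set[str], cve_cwes: set[str]) -> float:
    """Return 1.3 when a SAST finding shares at least one CWE with a CVE in
    the same package, else 1.0 (no boost)."""
    return CWE_BOOST if sast_cwes & cve_cwes else 1.0
```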

Suppressing findings

POST /api/sast/runs/:runId/suppress
Authorization: Bearer <token>

{
  "findingId": "uuid",
  "reason": "False positive — input validated by middleware"
}

Suppressed findings remain in the audit log but are excluded from policy evaluation and dashboard counts.

Attack Path Graph

The Attack Path Graph maps how an attacker could move from a vulnerable dependency through your codebase to a reachable entry point. It combines SCA vulnerability data with SAST code findings and import resolution to produce scored, prioritized attack chains.

Path score formula

score = depRiskScore × reachabilityWeight × sinkWeight × aiAmplification × cweBoost

  depRiskScore      → CVSS + EPSS composite (0–10)
  reachabilityWeight → 1.0 imported / 1.3 called directly
  sinkWeight        → 1.5 SQL/exec, 1.3 path/SSRF, 1.0 log
  aiAmplification   → 1.0–1.2 based on AI risk context
  cweBoost          → 1.3× when SAST CWE matches CVE CWE

Range: [0, ~30.4] (a 10.0 dependency risk score with every multiplier at its maximum)
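Plugging the listed weights into the formula gives a straightforward product. This is an illustrative calculation using the factor names from the table above; the parameterization (boolean flags for reachability and CWE match) is an assumption for the sketch:

```python
def attack_path_score(dep_risk: float,        # CVSS + EPSS composite, 0-10
                      called_directly: bool,  # reachabilityWeight: 1.3 vs 1.0
                      sink_weight: float,     # 1.5 SQL/exec, 1.3 path/SSRF, 1.0 log
                      ai_amplification: float = 1.0,  # 1.0-1.2 from AI risk context
                      cwe_match: bool = False) -> float:
    """score = depRiskScore x reachabilityWeight x sinkWeight x aiAmplification x cweBoost"""
    reachability = 1.3 if called_directly else 1.0
    boost = 1.3 if cwe_match else 1.0
    return dep_risk * reachability * sink_weight * ai_amplification * boost
```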

API

GET /api/scans/:id/attack-graph       # Full graph (nodes + edges)
GET /api/scans/:id/attack-path/:pathId # Path detail + narrative explanation

Policy Engine

Define security policies to gate CI/CD pipelines. Policies evaluate findings against thresholds, blocked rules, and EPSS minimums. A failed policy returns a non-zero exit code that fails the build.

Policy configuration

{
  "thresholds": {
    "critical": 0,     // fail if any CRITICAL findings
    "high": 3,
    "medium": null,    // null = no limit
    "low": null
  },
  "blockedRules": [
    "JS-TAINT-SQL-001",
    "PY-EXEC-001"
  ],
  "deltaOnly": true,   // only evaluate findings in changed lines (PR gate)
  "minEpss": 0.4       // only count vulns with EPSS ≥ 0.4
}
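The threshold logic can be sketched as follows. The finding shape and field names (`severity`, `ruleId`, `epss`) are assumptions for illustration, and `deltaOnly` scoping is omitted because it happens server-side; the server's actual evaluation is authoritative:

```python
from collections import Counter

def evaluate_policy(findings: list[dict], policy: dict) -> dict:
    """Return {"passed": bool, "violations": [...]} for a policy shaped like
    the JSON above (thresholds, blockedRules, minEpss)."""
    min_epss = policy.get("minEpss")
    if min_epss is not None:
        # Only count vulns at or above the EPSS floor
        findings = [f for f in findings if f.get("epss", 0) >= min_epss]

    violations = []
    counts = Counter(f["severity"].lower() for f in findings)
    for sev, limit in policy.get("thresholds", {}).items():
        if limit is not None and counts.get(sev, 0) > limit:
            violations.append({"rule": f"{sev} threshold",
                               "found": counts[sev], "limit": limit})
    blocked = set(policy.get("blockedRules", []))
    for f in findings:
        if f.get("ruleId") in blocked:
            violations.append({"rule": f"blocked rule {f['ruleId']}",
                               "found": 1, "limit": 0})
    return {"passed": not violations, "violations": violations}
```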

Policy evaluation

POST /api/sast/policy/evaluate
{
  "runId": "uuid",
  "policy": { ... }
}

# Response:
{
  "passed": false,
  "violations": [
    { "rule": "critical threshold", "found": 2, "limit": 0 }
  ]
}

SBOM & Export

| Format | Endpoint | Use case |
|---|---|---|
| CycloneDX 1.4 JSON | GET /api/scans/:id/sbom | SBOM for compliance, procurement, auditors |
| SARIF 2.1.0 | GET /api/sast/runs/:id/sarif | SAST findings for GitHub Code Scanning, Azure DevOps, IDE plugins |
| CSV | GET /api/scans/:id/export/csv | SCA findings for reporting, spreadsheet analysis |

Scan History & Trends

Each completed scan saves a snapshot. The trend engine compares consecutive snapshots to compute a risk delta: new findings, resolved findings, and change in composite risk score. Trend data powers the dashboard timeline chart.

GET /api/scans/:id/history   # List historical snapshots for a project
GET /api/scans/:id/trend     # Risk delta between last 2 snapshots
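Conceptually, a risk delta between two snapshots reduces to set operations over stable finding identifiers plus a score difference. This sketch assumes a snapshot shape of `{"findings": set_of_ids, "riskScore": float}`, which is illustrative rather than the stored format:

```python
def trend_delta(prev: dict, curr: dict) -> dict:
    """Compare two scan snapshots and report new/resolved findings and the
    change in composite risk score."""
    new = curr["findings"] - prev["findings"]
    resolved = prev["findings"] - curr["findings"]
    return {
        "newFindings": sorted(new),
        "resolvedFindings": sorted(resolved),
        "riskDelta": round(curr["riskScore"] - prev["riskScore"], 2),
    }
```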

AI Risk Detection

dpndncY flags AI/ML-specific security risks in addition to standard vulnerabilities:

  • Model loading via insecure deserialization (pickle, unsafe torch.load)
  • LLM prompt injection surface in AI framework integrations
  • Model supply chain risks: packages that download models from unverified registries at runtime
  • Training data exposure via logging, serialization, or external API calls

Authentication

| Type | Lifetime | Use case |
|---|---|---|
| Session token | 8h (configurable) | Browser UI. Issued on login, stored as HTTP-only cookie. |
| Personal API Token (PAT) | 1 year (configurable) | CI/CD pipelines, VS Code extension, API scripts. Passed as Authorization: Bearer <token>. |

Creating a PAT

Via UI: Profile → API Tokens → Create Token

POST /api/tokens
Authorization: Bearer <session-token>
{ "name": "GitHub Actions", "expiresIn": "365d" }

# Save the returned token value — shown only once

API Reference

Base URL: https://sca.yourcompany.com. All endpoints require Authorization: Bearer <token> unless noted.

Scans (SCA)

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/scans | Start SCA scan |
| GET | /api/scans | List scans (paginated) |
| GET | /api/scans/:id | Scan detail + findings |
| GET | /api/scans/:id/sbom | CycloneDX SBOM export |
| GET | /api/scans/:id/export/csv | CSV export |
| GET | /api/scans/:id/attack-graph | Attack Path Graph |
| GET | /api/scans/:id/trend | Risk trend delta |

Example body for POST /api/scans:

{ "repoPath": "/path/or/git-url", "branch": "main", "label": "optional" }

SAST

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/sast/scan | Start SAST scan (async) |
| GET | /api/sast/runs/:id | Run status + summary |
| GET | /api/sast/runs/:id/findings | Findings list (filterable) |
| GET | /api/sast/runs/:id/sarif | SARIF 2.1.0 export |
| POST | /api/sast/runs/:id/suppress | Suppress a finding |
| POST | /api/sast/policy/evaluate | Evaluate policy gate |

Example body for POST /api/sast/scan:

{ "repoPath": "/path", "branch": "feat/x", "baseBranch": "main", "deltaOnly": true }

Packages (VS Code / quick check)

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/packages/check | Check packages by name + version |

Example body:

{ "packages": [{ "name": "lodash", "version": "4.17.15", "ecosystem": "npm" }] }

Tokens

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/tokens | Create PAT |
| GET | /api/tokens | List tokens |
| DELETE | /api/tokens/:id | Revoke token |

CI/CD Integration

Use a Personal API Token to add dpndncY security gates to your pipeline. The typical pattern: scan → poll until complete → evaluate policy → fail build on violation.

GitHub Actions

.github/workflows/security.yml
name: Security Gate

on: [push, pull_request]

jobs:
  dpndncy-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: SCA Scan
        id: sca
        run: |
          RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/scans \
            -H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"repoPath":"${{ github.workspace }}","branch":"${{ github.ref_name }}"}')
          echo "scan_id=$(echo $RESULT | jq -r .id)" >> $GITHUB_OUTPUT
        env:
          DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}

      - name: SAST Scan
        id: sast
        run: |
          RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/scan \
            -H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"repoPath":"${{ github.workspace }}","branch":"${{ github.ref_name }}","baseBranch":"main","deltaOnly":true}')
          RUN_ID=$(echo $RESULT | jq -r .runId)
          # Poll until complete
          for i in $(seq 1 30); do
            STATUS=$(curl -sf $DPNDNCY_URL/api/sast/runs/$RUN_ID \
              -H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" | jq -r .status)
            [ "$STATUS" = "completed" ] && break
            sleep 10
          done
          echo "run_id=$RUN_ID" >> $GITHUB_OUTPUT
        env:
          DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}

      - name: Policy Gate
        run: |
          POLICY='{"thresholds":{"critical":0,"high":5},"deltaOnly":true}'
          RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/policy/evaluate \
            -H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"runId\":\"${{ steps.sast.outputs.run_id }}\",\"policy\":$POLICY}")
          echo $RESULT | jq .
          echo $RESULT | jq -e '.passed == true'
        env:
          DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}

GitLab CI

.gitlab-ci.yml
security-scan:
  stage: test
  image: curlimages/curl:latest
  script:
    - |
      SCAN=$(curl -sf -X POST $DPNDNCY_URL/api/scans \
        -H "Authorization: Bearer $DPNDNCY_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"repoPath\":\"$CI_PROJECT_DIR\",\"branch\":\"$CI_COMMIT_REF_NAME\"}")
      SCAN_ID=$(echo $SCAN | grep -o '"id":"[^"]*"' | cut -d'"' -f4)

      SAST=$(curl -sf -X POST $DPNDNCY_URL/api/sast/scan \
        -H "Authorization: Bearer $DPNDNCY_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"repoPath\":\"$CI_PROJECT_DIR\",\"branch\":\"$CI_COMMIT_REF_NAME\",\"deltaOnly\":true}")
      RUN_ID=$(echo $SAST | grep -o '"runId":"[^"]*"' | cut -d'"' -f4)

      for i in $(seq 1 30); do
        STATUS=$(curl -sf $DPNDNCY_URL/api/sast/runs/$RUN_ID \
          -H "Authorization: Bearer $DPNDNCY_TOKEN" | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
        [ "$STATUS" = "completed" ] && break
        sleep 10
      done

      POLICY_RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/policy/evaluate \
        -H "Authorization: Bearer $DPNDNCY_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"runId\":\"$RUN_ID\",\"policy\":{\"thresholds\":{\"critical\":0},\"deltaOnly\":true}}")
      echo $POLICY_RESULT | grep -q '"passed":true' || (echo "Security policy failed" && exit 1)
  variables:
    DPNDNCY_URL: https://sca.yourcompany.com

Add DPNDNCY_TOKEN and DPNDNCY_URL as CI secrets

In GitHub: Repository → Settings → Secrets → Actions. In GitLab: Settings → CI/CD → Variables. Mark them as protected and masked.


GitHub Integration

Connect dpndncY to GitHub to browse repositories and open automated remediation pull requests for vulnerable dependencies.

Setup

  1. Create a GitHub Personal Access Token with repo scope (or a fine-grained token with read/write on contents and pull requests)
  2. Set GITHUB_TOKEN in your configuration and restart the service
  3. Verify the connection: Settings → Integrations → GitHub

Remediation PRs

From any scan result, select affected packages and click Open Remediation PR. dpndncY creates a branch, bumps the vulnerable dependency to the patched version in the manifest and lock file, and opens a PR with full CVE context in the description.

GitLab Integration

Same capabilities as GitHub: repository browsing and automated Merge Requests for vulnerability remediation.

Setup

  1. Create a GitLab Personal Access Token with api scope
  2. Set GITLAB_TOKEN (and GITLAB_URL for self-hosted instances) in configuration
  3. Restart the service

VS Code Extension

The dpndncY VS Code extension shows vulnerability data inline in your manifest files. Vulnerable packages are underlined with severity indicators — hover for CVE detail, CVSS score, and recommended fix version.

Installation

  1. Download dpndncy-security-*.vsix from your dpndncY instance: Settings → VS Code Extension
  2. In VS Code: Extensions → ⋯ → Install from VSIX…

Settings

| Setting | Description |
|---|---|
| dpndncy.serverUrl | URL of your dpndncY instance, e.g. https://sca.yourcompany.com |
| dpndncy.apiToken | Personal API Token (generate from Profile → API Tokens) |
| dpndncy.minSeverity | Minimum severity to show: LOW / MEDIUM / HIGH / CRITICAL |
| dpndncy.autoScan | Scan on file save (default: false) |

Notifications

| Channel | Configuration |
|---|---|
| Slack | Set SLACK_WEBHOOK_URL to a Slack Incoming Webhook URL. Notifications sent on scan completion and policy failure. |
| Discord | Set DISCORD_WEBHOOK_URL to a Discord webhook URL. |
| Email | Configure SMTP settings. Emails sent for scan completion, policy failures, and new CRITICAL vulnerabilities. |
| Custom webhook | POST /api/webhooks — register any HTTP endpoint to receive JSON payloads for scan events. Supports HMAC request signing. |
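For the custom webhook channel, HMAC request signing lets your receiving endpoint verify that a payload really came from dpndncY. A receiver might verify like this; the hex-encoded HMAC-SHA256 scheme and the idea of a signature header are assumptions for the sketch, so check your instance's webhook settings for the actual header name and algorithm:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare against the
    received signature in constant time."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Always compare with `hmac.compare_digest` (not `==`) to avoid timing side channels, and compute the HMAC over the raw bytes before any JSON parsing.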

SSO / OIDC

dpndncY supports OIDC-based SSO with Okta, Azure AD, Auth0, and any OIDC-compliant identity provider. Users are provisioned automatically on first login. Role assignment is controlled via OIDC group claims.

OIDC_ISSUER=https://yourorg.okta.com/oauth2/default
OIDC_CLIENT_ID=0oa1b2c3d4e5
OIDC_CLIENT_SECRET=your-client-secret
OIDC_CALLBACK_URL=https://sca.yourcompany.com/auth/oidc/callback

When configured, a Sign in with SSO button appears on the login page. Password-based login for local accounts can be disabled from Settings → Authentication.


User Management

| Role | Capabilities |
|---|---|
| Admin | Full access: manage users, integrations, settings, all scans, tokens, audit log |
| Viewer | Read-only: view scan results, findings, SBOM exports. Can call /api/packages/check. Cannot start scans or change settings. |

Manage users at Settings → Users or via API:

POST /api/admin/users
Authorization: Bearer <admin-token>
{ "email": "engineer@yourcompany.com", "role": "viewer" }

API Tokens (PAT)

  • Tokens are scoped to the permissions of the creating user
  • The token value is shown once only — store it immediately in your secrets manager
  • Create separate tokens per integration (CI, VS Code, monitoring) for independent revocation
  • Revocation is instant — use Profile → API Tokens or DELETE /api/tokens/:id
  • Audit token usage from Settings → Audit Log

Backup & Restore

All persistent state is in the SQLite database and the scan snapshot directory. Both live on the data volume.

Docker backup

# Backup the data volume to a tar archive
docker run --rm \
  -v dpndncy_data:/data \
  -v $(pwd)/backups:/backups \
  alpine tar czf /backups/dpndncy-$(date +%Y%m%d).tar.gz /data

# Restore
docker run --rm \
  -v dpndncy_data:/data \
  -v $(pwd)/backups:/backups \
  alpine tar xzf /backups/dpndncy-20260309.tar.gz -C /

Kubernetes backup

Backup the PVC using your cluster's volume snapshot mechanism (e.g., Velero, CSI snapshots):

velero backup create dpndncy-backup \
  --include-namespaces security \
  --wait

Windows backup

Stop the service, copy C:\ProgramData\dpndncY\ to a backup location, then restart:

net stop dpndncy
:: %date% can contain slashes depending on locale, so name the backup folder explicitly
robocopy "C:\ProgramData\dpndncY" "D:\Backups\dpndncy-20260309" /E /COPYALL
net start dpndncy

Upgrades

dpndncY applies database migrations automatically on startup. Always back up your data before upgrading.

Docker

# Pull the new image and recreate the container (data volume is preserved)
docker compose pull
docker compose up -d

Kubernetes

helm repo update
helm upgrade dpndncy dpndncy/dpndncy-platform \
  --namespace security \
  -f values-prod.yaml

Windows

Run the new version's .exe installer over the existing installation. The installer handles the service stop/start and data migration automatically.

Before upgrading

Read the release notes for the new version. Major versions may include breaking API or configuration changes. Back up your data volume before running any upgrade.

Troubleshooting

Container won't start

  • Check that JWT_SECRET is set and non-empty
  • Verify the data volume is mounted and writable by the container process
  • Check logs: docker compose logs dpndncy

Scans return no findings

  • Verify the target path is mounted into the container and accessible
  • Check that a supported manifest file exists in the target directory
  • Ensure outbound HTTPS to api.osv.dev and services.nvd.nist.gov is allowed by your firewall/proxy

SAST scan times out

  • Increase SAST_MAX_RUNTIME_SEC (e.g. 600 for large monorepos)
  • Use deltaOnly: true to limit analysis to changed files
  • Ensure the container has sufficient CPU — SAST is CPU-bound

SSO / OIDC login fails

  • Verify OIDC_CALLBACK_URL matches exactly what's registered in your IdP (including trailing slash if any)
  • Check that the dpndncY instance is reachable at the callback URL from the browser, not just from the server
  • Review the server log for the OIDC error response detail

Windows Service won't start

  • Check the Windows Event Viewer: Application → dpndncY
  • Verify the service account has read/write access to C:\ProgramData\dpndncY
  • Check that the configured port is not in use by another process: netstat -ano | findstr :3000

Viewing logs

# Docker
docker compose logs -f dpndncy

# Kubernetes
kubectl logs -n security -l app=dpndncy -f

# Windows
Get-EventLog -LogName Application -Source dpndncY -Newest 100

Enterprise support

Licensed customers have access to priority support. Contact support@dpndncy.dev with your instance ID (visible in Settings → About) and log output.