dpndncY Enterprise Docs

Software supply chain security,
built for enterprise teams

dpndncY is an enterprise-grade, self-hosted software supply chain security platform. Its lead product is the Dependency Firewall — pre-install enforcement that blocks risky packages before they enter node_modules, site-packages, or your local Maven repository. Each block / allow / bypass carries a signed JWS attestation with EPSS, CISA KEV, ExploitDB, reachability, attack-path, and trust-delta evidence — verifiable offline with the public key.

The same multi-signal stack also powers full software composition analysis (SCA), static application security testing (SAST), container image scanning, IaC scanning, and secrets detection — all within your own infrastructure perimeter.

Deployable as a Docker container, a Kubernetes workload via Helm, or a Windows Server installer — no developer toolchain required on the host.

🐳
Docker Compose
One YAML file, one command. Recommended for teams without Kubernetes.
☸️
Kubernetes + Helm
Production-grade deployment with auto-scaling, persistent volumes, and ingress.
🪟
Windows Installer
MSI/EXE setup wizard for Windows Server 2019+. Installs as a Windows Service.
🔍
SCA + SAST
17 package ecosystems, 13+ languages, 400+ SAST rules, AI risk context.
🕸️
Attack Paths
Graph-based exploit path analysis from vulnerable dependency to entry point.
⚙️
REST API
Full API for CI/CD pipelines, IDE plugins, and custom integrations.

Architecture

dpndncY runs as a single containerized process backed by an embedded SQLite database. There are no external service dependencies — no Redis, no Postgres, no message queue — making deployment simple and operations lightweight.

Component | Description
HTTP API Server | Express-based REST API. All scan orchestration, authentication, and data access.
Embedded Database | SQLite via better-sqlite3. Schema auto-migrates on startup. Persisted via volume mount.
SCA Engine | Dependency manifest parser + multi-source vulnerability enrichment (OSV, NVD, GHSA, CISA KEV, EPSS).
SAST Engine | Proprietary 400+ rule engine: JS/TS taint analysis, Python AST analysis, multi-language pattern scanner. All run in-process.
Attack Path Engine | Graph builder, path finder, CWE-to-CVE correlation, exploitability scorer.
Web Frontend | Single-page application served as static files from the container.
Data sovereignty

During scans, dpndncY queries external vulnerability databases (OSV.dev, NVD, GHSA) over HTTPS. Only package names, versions, and hashes are transmitted — never source code. All scan results, findings, and metadata remain exclusively inside your environment.

Quick Start

The fastest path to a running instance is Docker Compose. Copy the snippet below, fill in three environment values, and you're up.

docker-compose.yml
version: "3.8"
services:
  dpndncy:
    image: dpndncy/platform:2.9.0
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      JWT_SECRET: "change-to-a-long-random-string"
      ADMIN_EMAIL: "admin@yourcompany.com"
      ADMIN_PASSWORD: "change-me-on-first-login"
    volumes:
      - dpndncy_data:/app/data

volumes:
  dpndncy_data:
docker compose up -d
# Open http://localhost:3000 — log in with ADMIN_EMAIL / ADMIN_PASSWORD

See Docker deployment for production-hardened configuration, or Kubernetes + Helm for enterprise-scale deployments.


Docker Deployment

Docker Compose is the recommended deployment method for teams that want a production-ready instance without Kubernetes overhead. Requires Docker Engine 20.10+ and Docker Compose v2.

Production docker-compose.yml

docker-compose.yml
version: "3.8"

services:
  dpndncy:
    image: dpndncy/platform:2.9.0
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"    # Bind to localhost; expose via reverse proxy
    environment:
      NODE_ENV: production
      JWT_SECRET: "${JWT_SECRET}"
      ADMIN_EMAIL: "${ADMIN_EMAIL}"
      ADMIN_PASSWORD: "${ADMIN_PASSWORD}"
      SESSION_DURATION: "8h"
      # GitHub integration (optional)
      GITHUB_TOKEN: "${GITHUB_TOKEN}"
      # Email notifications (optional)
      SMTP_HOST: "${SMTP_HOST}"
      SMTP_PORT: "587"
      SMTP_USER: "${SMTP_USER}"
      SMTP_PASS: "${SMTP_PASS}"
    volumes:
      - dpndncy_data:/app/data
      - dpndncy_scans:/app/data/scans    # Scan history & snapshots
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"

volumes:
  dpndncy_data:
  dpndncy_scans:

Store secrets in a .env file alongside the compose file (never commit it to source control):

.env (not committed to git)
JWT_SECRET=your-very-long-random-secret-64-chars-minimum
ADMIN_EMAIL=admin@yourcompany.com
ADMIN_PASSWORD=initial-password-change-on-first-login
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
SMTP_HOST=smtp.yourcompany.com
SMTP_USER=dpndncy-alerts@yourcompany.com
SMTP_PASS=your-smtp-password

Reverse proxy with nginx

In production, run dpndncY behind nginx or another reverse proxy for TLS termination:

/etc/nginx/sites-available/dpndncy
server {
    listen 443 ssl http2;
    server_name sca.yourcompany.com;

    ssl_certificate     /etc/ssl/certs/sca.yourcompany.com.crt;
    ssl_certificate_key /etc/ssl/private/sca.yourcompany.com.key;

    location / {
        proxy_pass         http://127.0.0.1:3000;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_read_timeout 120s;
        client_max_body_size 50m;
    }
}

server {
    listen 80;
    server_name sca.yourcompany.com;
    return 301 https://$host$request_uri;
}

Starting and managing the service

# Start in background
docker compose up -d

# View logs
docker compose logs -f dpndncy

# Restart after config change
docker compose restart dpndncy

# Stop
docker compose down

# Update to a new version (data persists in named volumes)
docker compose pull
docker compose up -d

Kubernetes + Helm

The dpndncY Helm chart deploys the platform as a Kubernetes Deployment with a PersistentVolumeClaim for data, a Service, and an optional Ingress resource. Requires Kubernetes 1.24+ and Helm 3.x.

Add the Helm repository

helm repo add dpndncy https://charts.dpndncy.dev
helm repo update

Install with minimal configuration

helm install dpndncy dpndncy/dpndncy-platform \
  --namespace security \
  --create-namespace \
  --set auth.jwtSecret="your-secret-here" \
  --set admin.email="admin@yourcompany.com" \
  --set admin.password="initial-password"

Production values file

values-prod.yaml
replicaCount: 2

image:
  repository: dpndncy/platform
  tag: "2.9.0"
  pullPolicy: IfNotPresent

auth:
  jwtSecret: ""           # Set via --set or secretRef
  sessionDuration: "8h"

admin:
  email: "admin@yourcompany.com"
  password: ""           # Set via --set or secretRef

persistence:
  enabled: true
  storageClass: "standard"
  size: 20Gi

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: sca.yourcompany.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: dpndncy-tls
      hosts:
        - sca.yourcompany.com

resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
    cpu: "1000m"

github:
  token: ""            # GITHUB_TOKEN — set via secretRef

smtp:
  host: "smtp.yourcompany.com"
  port: 587
  user: ""
  pass: ""

sso:
  oidcIssuer: ""
  clientId: ""
  clientSecret: ""
helm install dpndncy dpndncy/dpndncy-platform \
  --namespace security \
  --create-namespace \
  -f values-prod.yaml \
  --set auth.jwtSecret="$(openssl rand -hex 32)" \
  --set admin.password="your-initial-password"

Secrets management

For production, store sensitive values as Kubernetes Secrets and reference them via the chart's existingSecret option rather than passing them as Helm values:

kubectl create secret generic dpndncy-secrets \
  --namespace security \
  --from-literal=jwtSecret="$(openssl rand -hex 32)" \
  --from-literal=adminPassword="your-password" \
  --from-literal=githubToken="ghp_xxxx"
# In values-prod.yaml:
existingSecret: dpndncy-secrets

Upgrade

helm repo update
helm upgrade dpndncy dpndncy/dpndncy-platform \
  --namespace security \
  -f values-prod.yaml \
  --set auth.jwtSecret="existing-secret"
Persistent data on upgrade

The PVC is not deleted on helm upgrade or helm uninstall by default. Your scan history and database are retained across version upgrades.

Windows Installer

dpndncY is available as a signed Windows installer (.exe) for deployment on Windows Server 2019 or later. The installer handles all dependencies and installs dpndncY as a Windows Service that starts automatically with the OS.

Prerequisites

  • Windows Server 2019, 2022, or Windows 10/11 (64-bit)
  • 4 GB RAM minimum; 8 GB recommended
  • Administrator privileges for installation
  • Outbound HTTPS (port 443) to OSV.dev, NVD, and GHSA for vulnerability data
  • No Node.js, Python, or other runtime required — all bundled in the installer

Installation steps

1. Download the installer

Download dpndncY-Setup-2.9.0-x64.exe from your licensed download portal or request it via License Request.

2. Run as Administrator

Right-click the installer → Run as administrator. The setup wizard will launch.

3. Configure the installation

The wizard will ask for:

  • Install directory (default: C:\Program Files\dpndncY)
  • Data directory (default: C:\ProgramData\dpndncY) — keep on a drive with sufficient space
  • HTTP port (default: 3000)
  • Admin email and initial password
  • JWT secret — auto-generated if left blank
  • Service account — by default runs as NT AUTHORITY\NetworkService

4. Complete installation

Click Install. The wizard will install all bundled runtimes, configure the service, and open the firewall rule for the selected port.

5. Access the platform

Open http://localhost:3000 (or the configured port) in a browser. Log in with the admin credentials you set during installation.

Managing the Windows Service

# View service status
sc query dpndncy

# Stop / Start / Restart
net stop dpndncy
net start dpndncy

# Or via Services MMC (services.msc) — look for "dpndncY Platform"

Post-install configuration

After installation, edit the configuration file at C:\ProgramData\dpndncY\config.env to add integration credentials (GitHub token, SMTP settings, OIDC, etc.), then restart the service.

# C:\ProgramData\dpndncY\config.env
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
SMTP_HOST=smtp.yourcompany.com
SMTP_PORT=587
SMTP_USER=dpndncy@yourcompany.com
SMTP_PASS=your-smtp-password
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/xxx

Uninstallation

Use Control Panel → Programs → Uninstall a program → dpndncY. The uninstaller removes the service and binaries. The data directory (C:\ProgramData\dpndncY) is preserved — delete it manually if you want to remove all data.

Upgrade

Run the new version's installer over the existing installation. The installer detects the previous version, stops the service, updates the binaries, and restarts — your data and configuration are preserved.

System Requirements

Deployment | Minimum | Recommended (production)
Docker | Docker Engine 20.10, 2 vCPU, 4 GB RAM, 20 GB disk | 4 vCPU, 8 GB RAM, 100 GB disk (for large scan histories)
Kubernetes | K8s 1.24+, 2 vCPU, 4 GB RAM per pod, 20 GB PVC | 4 vCPU, 8 GB RAM, 100 GB PVC; 2 replicas
Windows Installer | Windows Server 2019, 4 vCPU, 4 GB RAM, 20 GB | 8 vCPU, 8 GB RAM, 100 GB on a dedicated drive

Requirement | Detail
Outbound HTTPS | Port 443 to api.osv.dev, services.nvd.nist.gov, api.github.com, api.first.org (EPSS)
Inbound HTTP | Port 3000 (configurable). Expose via reverse proxy with TLS for production.
Storage I/O | SSD or NVMe recommended for the data volume. SQLite is write-heavy during large scans.
No runtime dependencies | All runtimes (Node.js, Python) are bundled inside the container / Windows installer. The host only needs Docker or Windows Server.

Configuration Reference

Configuration is provided via environment variables. In Docker deployments, set them in docker-compose.yml or a .env file. In Kubernetes, use Helm values or a Secret. In the Windows installer, edit C:\ProgramData\dpndncY\config.env.

Core

Variable | Default | Description
JWT_SECRET | required | Secret for signing session tokens. Minimum 32 random characters. Rotate this to invalidate all active sessions.
PORT | 3000 | HTTP port the server binds to
NODE_ENV | production | Set to production. Enables secure cookie flags and disables debug output.
SESSION_DURATION | 8h | Validity period for browser session tokens (e.g. 4h, 1d)
DB_PATH | /app/data/dpndncy.db | Path to the SQLite database file. Must be on a persistent volume.

Admin account

Variable | Description
ADMIN_EMAIL | Email for the default admin account, created on first startup only
ADMIN_PASSWORD | Initial password. Change immediately after first login via Profile → Change Password

SAST engine tuning

Variable | Default | Description
SAST_MAX_RUNTIME_SEC | 300 | Max wall-clock seconds per SAST scan before forced timeout. Increase for large monorepos.
SAST_STORAGE_PATH | /app/data/sast | Directory for SARIF output files

Email (SMTP)

Variable | Description
SMTP_HOST | SMTP relay hostname (e.g. smtp.office365.com, smtp.gmail.com)
SMTP_PORT | 587 for STARTTLS (recommended), 465 for implicit TLS, 25 for unauthenticated relay
SMTP_USER | SMTP authentication username
SMTP_PASS | SMTP password or app password
SMTP_FROM | From address for notifications (e.g. dpndncy@yourcompany.com)

Integrations

Variable | Description
GITHUB_TOKEN | GitHub PAT with repo scope — enables repository listing and remediation PRs
GITLAB_TOKEN | GitLab PAT with api scope
GITLAB_URL | GitLab instance URL (default: https://gitlab.com). Set for self-hosted GitLab.
SLACK_WEBHOOK_URL | Slack incoming webhook URL for scan notifications
DISCORD_WEBHOOK_URL | Discord webhook URL
OIDC_ISSUER | OIDC issuer URL (Okta, Azure AD, Auth0)
OIDC_CLIENT_ID | OIDC client ID
OIDC_CLIENT_SECRET | OIDC client secret
OIDC_CALLBACK_URL | Full callback URL, e.g. https://sca.yourcompany.com/auth/oidc/callback

First-Time Setup

1. Log in with the admin account

Open the platform URL in a browser. Log in with the ADMIN_EMAIL and ADMIN_PASSWORD you configured.

2. Change the admin password

Go to Profile → Change Password. The initial password is a placeholder — change it immediately to a strong credential.

3. Connect your source code repositories

Go to Settings → Integrations and connect GitHub or GitLab. This enables repository browsing and remediation PRs/MRs.

4. Run your first scan

Go to Scans → New Scan. Select a repository or enter a local path (accessible from the container). Click Start Scan.

5. Invite team members

Go to Settings → Users → Invite User. Assign the viewer role for read-only access or admin for full access. Or configure SSO for automatic provisioning.

6. Configure CI/CD integration

Generate a Personal API Token and add it to your CI/CD pipeline secrets. Use the CI/CD examples to add security gates to your pipelines.


Dependency Firewall

The Dependency Firewall is dpndncY's pre-install enforcement layer. It evaluates every {ecosystem, name, version} request against vulnerability data, exploitability signals, trust score, license policy, and tenant-specific rules — and returns an allow / block / review decision before the package ever lands in node_modules, site-packages, or your local Maven repository.

Why pre-install

Post-scan SCA tools (Snyk, Black Duck, Dependabot) tell you what's wrong after a vulnerable package is already in your tree. The firewall stops the install in the first place. The same multi-signal exploitability stack that powers dpndncY's prioritization — CISA KEV, EPSS, ExploitDB, JS/TS reachability, attack-path graphs, trust score, license obligations — is applied at install time, not after.

Decision evidence (signed)

Every decision the firewall makes carries a JSON Web Signature (JWS) attestation containing:

  • The decision and rationale (Patch Now / Patch This Sprint / Monitor / Accept Risk / Block / Allow)
  • Each signal value with its source: EPSS score and snapshot URL, CISA KEV catalog version, ExploitDB entry IDs, reachability proofs (file:line of vulnerable function calls), CVSS vector and score
  • The trust score and trust delta vs. the previously approved version
  • The policy ID and version that was applied
  • Scan ID, project ID, tenant ID, and timestamp

Attestations are signed with the dpndncY licensing keypair. Any party with the public key can verify the bundle offline — auditors, CI pipelines storing build artifacts, downstream customers proving supply-chain hygiene.

Modes

  • Enforce — block requests that violate policy. Bypass requires a signed waiver.
  • Soak / monitor-only — log every decision without blocking, for a configurable rollout phase.
  • Review — route ambiguous decisions to a human approver with full evidence attached.

Trust-delta gating

Beyond absolute thresholds, the firewall flags any package whose trust score has dropped relative to the previously approved version — catching typosquats, package takeovers, and maintainer rotations that absolute thresholds miss.
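A minimal sketch of the idea — the field names and the 10-point tolerance below are illustrative assumptions, not the shipped evaluator's logic:

```javascript
// Trust-delta gating: flag any version whose trust score dropped more than
// a tolerance vs. the last approved version, regardless of absolute score.
function trustDeltaGate(candidateScore, lastApprovedScore, tolerance = 10) {
  const delta = candidateScore - lastApprovedScore;
  if (delta < -tolerance) {
    return { decision: 'review', delta, reason: 'trust score dropped vs. approved version' };
  }
  return { decision: 'allow', delta };
}
```

A package scoring 55 would pass an absolute threshold of 50, but if the previously approved version scored 83, the -28 delta routes it to review.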

Bypass and audit

Bypassing the firewall always requires one of: a human approver, a policy waiver with an expiry date, or an emergency token. Every bypass attempt is itself audited and signed — even bypassed installs leave an evidence trail.

Engine

The firewall engine ships in src/firewall/: evaluator.js (decision logic), packageRequest.js (request normalization), policy.js (policy evaluation), safeVersions.js (allowed-version resolution). The registry-proxy layer for transparent enforcement at the package-manager level (npm, PyPI, Maven Central, NuGet, RubyGems, Crates.io, proxy.golang.org) is in active build-out.


SCA Scanning

Software Composition Analysis (SCA) scans dependency manifests, resolves the full dependency tree, and checks each package against multiple vulnerability databases. Results are enriched with real-world exploitability signals to help teams prioritize what actually matters.

How it works

  1. dpndncY traverses the target directory for supported manifest and lock files
  2. Dependency trees are parsed — direct and transitive dependencies at exact resolved versions
  3. Package identifiers are queried against OSV, NVD, GHSA, CISA KEV, and EPSS
  4. Findings are enriched with CVSS scores, EPSS probability, KEV status, and ExploitDB references
  5. A composite risk score is computed per vulnerability combining all signals
  6. Results are stored and surfaced in the UI with remediation guidance and upgrade paths
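Step 3 can be illustrated against OSV.dev's public query endpoint (POST https://api.osv.dev/v1/query). The helper below only assembles the request body in the OSV v1 schema; dpndncY's actual batching and enrichment pipeline is internal:

```javascript
// Build a request body for OSV.dev's POST /v1/query endpoint.
// Ecosystem names follow the OSV schema, e.g. "npm", "PyPI", "Maven", "Go".
function buildOsvQuery(ecosystem, name, version) {
  return {
    package: { ecosystem, name },
    version,
  };
}

// Example usage (the network call is shown for context only):
// const res = await fetch('https://api.osv.dev/v1/query', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildOsvQuery('npm', 'minimist', '1.2.5')),
// });
// const { vulns = [] } = await res.json();
const body = buildOsvQuery('npm', 'minimist', '1.2.5');
```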

Supported Ecosystems (17)

17 dedicated ecosystem scanners plus a generic fallback. Both the Dependency Firewall and post-scan SCA share the same parser stack — one source of truth.

Ecosystem | Manifest files detected | Lock file support
npm / Node.js | package.json | package-lock.json, yarn.lock, pnpm-lock.yaml
Python | requirements.txt, Pipfile, pyproject.toml | Pipfile.lock, poetry.lock
Java / Maven | pom.xml | —
Java / Gradle | build.gradle, build.gradle.kts | gradle.lockfile
Go | go.mod | go.sum
.NET / NuGet | *.csproj, packages.config | packages.lock.json
Ruby | Gemfile | Gemfile.lock
PHP / Composer | composer.json | composer.lock
Rust / Cargo | Cargo.toml | Cargo.lock
C / C++ | conanfile.txt, vcpkg.json, CMakeLists.txt | —
Perl / CPAN | cpanfile, META.json, Makefile.PL | —
R / CRAN | DESCRIPTION, renv.lock | renv.lock
Dart / Pub | pubspec.yaml | pubspec.lock
Elixir / Hex | mix.exs | mix.lock
OCaml / OPAM | *.opam, opam.locked | opam.locked
Swift / SPM | Package.swift | Package.resolved
Generic fallback | Multi-language reachability scanner for any ecosystem without a dedicated parser | —
Lock files preferred

Lock files are used when present. They contain exact resolved versions for the entire dependency tree, resulting in more accurate CVE matching than manifest files alone.

Vulnerability Sources

Source | Data provided
OSV.dev | Open source vulnerability database (Google). Primary advisory source for npm, PyPI, Maven, Go, NuGet, Cargo, RubyGems.
NVD | NIST National Vulnerability Database. CVSS v3.1 base scores and vector strings.
GHSA | GitHub Security Advisories. Earlier disclosure, ecosystem-enriched detail.
CISA KEV | CISA Known Exploited Vulnerabilities catalog. Any CVE here is actively exploited in the wild — highest priority.
EPSS | Exploit Prediction Scoring System (FIRST.org). 0–1 probability of exploitation in the next 30 days.
ExploitDB | Public exploit code database. Presence of working exploit code amplifies severity.

SAST Scanning

The dpndncY SAST engine performs static analysis on your source code using three parallel analyzers. 404 rules across 13+ languages. No external SAST tool installation is required — all analysis runs inside the container.

Analyzer | Languages | Method
Taint Analyzer | JavaScript, TypeScript, Python | Intra-function data flow tracking from user-controlled sources to dangerous sinks, with call graph resolution up to depth 5. GraphQL resolver args.* and tRPC input.* as taint sources; Sequelize/Knex raw SQL methods, Fastify reply.send(), email transporter sinks all modeled.
Lang-specific AST Analyzer | Java, Kotlin, Go, C#, PHP, Ruby, Scala, Swift, Dart, Apex, VB.NET, Objective-C, C/C++ | Per-language AST analyzers detect SQL injection, XXE, SSRF, path traversal, deserialization, mass assignment, and language-idiomatic anti-patterns. Java analyzer covers Spring framework controllers; Kotlin analyzer adds JDBC string templates and Spring Boot cross-file analysis.
Pattern Analyzer | All 13+ languages + IaC + secrets | 404 regex/AST patterns covering injection, crypto misuse, secrets, insecure APIs, framework misconfiguration. Inline suppression supported via // dpndncy-ignore, // nosec, # noqa, // NOSONAR, // lgtm.

Starting a SAST scan via API

POST /api/sast/scan
Authorization: Bearer <token>
Content-Type: application/json

{
  "repoPath": "/mnt/repos/myapp",
  "branch": "feature/payment-refactor",
  "baseBranch": "main",
  "deltaOnly": true
}

Scans run asynchronously. Poll the run status:

GET /api/sast/runs/:runId
# status: "pending" | "running" | "completed" | "failed"

Supported Languages

Language | Rules | Analysis depth
JavaScript / TypeScript | 80+ | Full taint tracking with call graph; GraphQL/tRPC sources; Sequelize/Knex/Fastify sinks
Python | 55+ | AST taint analysis — subprocess/exec/eval/deserialization/SMTP/redirect sinks; XXE for lxml/ElementTree
Java | 34+ | SQL injection, XXE, SSRF, insecure deserialization, path traversal, mass assignment, CSRF disable
Kotlin | 52 | All 34 Java rules plus 6 Kotlin-specific (JDBC string templates, ProcessBuilder, File path, URL SSRF, ObjectInputStream, hardcoded creds). Spring Boot cross-file analysis.
C# | 25+ | SQL injection, LDAP injection, XSS, insecure cryptography, mass assignment binding
Go | 25+ | Command injection, SSRF, path traversal, weak crypto, missing HTTP timeouts, XPath injection
PHP | 25+ | SQL injection, XSS, eval injection, file inclusion, SSRF, XPath, missing CSRF token
Ruby | 20+ | SQL injection, command injection, mass assignment, SSRF, XXE via Nokogiri, XPath, skip_before_action
C / C++ | 30+ | Buffer overflows, format strings, unsafe functions (gets/scanf/strcpy), TOCTOU, MD5/SHA1, double-free, UAF
Scala / Swift / Dart / Apex / VB.NET / Objective-C | tier-4 build-aware | Build-context-activated framework model packs (Play WS, Alamofire, Dio, AFNetworking, RestSharp). Source/sink propagation across helpers.
IaC | 30+ | Terraform, CloudFormation (JSON+YAML), Kubernetes manifests — CWE-269 privesc, CWE-22 path traversal, capability misconfigurations, exposed ports, weak secrets
Secrets (all files) | 731 | AWS, GCP, Azure, GitHub/GitLab tokens, OpenAI/Anthropic keys, private keys, JWTs, DB connection strings, and more

Rule Engine

Each rule defines an id, severity (CRITICAL / HIGH / MEDIUM / LOW), confidence (HIGH for taint-confirmed, MEDIUM for pattern-matched), associated CWEs, and remediation guidance.

CWE identifiers in SAST findings are correlated with CVEs in SCA results to compute Attack Path boosts — a SAST finding in the same package as a CVE with a matching CWE scores 1.3× higher.

Suppressing findings

POST /api/sast/runs/:runId/suppress
Authorization: Bearer <token>

{
  "findingId": "uuid",
  "reason": "False positive — input validated by middleware"
}

Suppressed findings remain in the audit log but are excluded from policy evaluation and dashboard counts.

Attack Path Graph

The Attack Path Graph maps how an attacker could move from a vulnerable dependency through your codebase to a reachable entry point. It combines SCA vulnerability data with SAST code findings and import resolution to produce scored, prioritized attack chains.

Path score formula

score = depRiskScore × reachabilityWeight × sinkWeight × aiAmplification × cweBoost

  depRiskScore      → CVSS + EPSS composite (0–10)
  reachabilityWeight → 1.0 imported / 1.3 called directly
  sinkWeight        → 1.5 SQL/exec, 1.3 path/SSRF, 1.0 log
  aiAmplification   → 1.0–1.2 based on AI risk context
  cweBoost          → 1.3× when SAST CWE matches CVE CWE

Range: [0, 2.0]
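Transcribed into code, with one caveat: the weights come from the table above, but how the raw product is normalized into the stated [0, 2.0] range is not documented here, so this sketch returns the unnormalized product:

```javascript
// Sketch of the path score formula. Field names are assumptions;
// weights are taken verbatim from the table above.
function pathScore({ depRiskScore, calledDirectly = false, sinkClass = 'log',
                     aiContext = 0, cweMatch = false }) {
  const reachabilityWeight = calledDirectly ? 1.3 : 1.0;                 // imported vs. called
  const sinkWeight = { 'sql/exec': 1.5, 'path/ssrf': 1.3, log: 1.0 }[sinkClass] ?? 1.0;
  const aiAmplification = 1.0 + 0.2 * Math.min(Math.max(aiContext, 0), 1); // 1.0–1.2
  const cweBoost = cweMatch ? 1.3 : 1.0;                                  // SAST CWE matches CVE CWE
  return depRiskScore * reachabilityWeight * sinkWeight * aiAmplification * cweBoost;
}
```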

API

GET /api/scans/:id/attack-graph       # Full graph (nodes + edges)
GET /api/scans/:id/attack-path/:pathId # Path detail + narrative explanation

Container Image Scanning

dpndncY parses Docker-save tarballs and extracts the dependency manifest from each image layer across 9 ecosystems. Live registry pull (Docker Hub, ECR, GCR, Quay) is in active build-out.

Supported per-layer ecosystems

  • OS packages: Debian (/var/lib/dpkg/status), Alpine (/lib/apk/db/installed), RPM (/var/lib/rpm/Packages)
  • Application packages per layer: npm, PyPI, Go modules, RubyGems, PHP Composer, Rust Cargo, .NET
  • Dockerfile linting: when present in the tarball
  • Secrets scanning: optional secret scan across layer files using the same 731-rule scanner

API

POST /api/scan/container
Authorization: Bearer <token>
Content-Type: multipart/form-data

# Body:
#   image=<docker-save-tarball.tar>
#   imageRef=docker.io/library/nginx:1.27 (optional metadata)

Infrastructure-as-Code (IaC) Scanning

IaC scanning runs as part of the SAST workflow. Detects security misconfigurations across Terraform, CloudFormation (JSON + YAML), and Kubernetes manifests.

Format | Detection | Example checks
Terraform | .tf files | Public S3 buckets, open security groups, unencrypted RDS, missing CloudTrail, hard-coded credentials
CloudFormation | .yaml, .yml, .json with AWSTemplateFormatVersion | Same controls as Terraform plus stack-specific checks. JSON support added in v2.8.
Kubernetes | .yaml/.yml with apiVersion + kind | Privilege escalation (CWE-269), path traversal (CWE-22), privileged: true, missing securityContext, host-network/host-PID, default service accounts
Dockerfile | Within tarballs | Best-practice lints alongside container scan

Non-IaC JSON (package.json, tsconfig.json, etc.) is excluded from CloudFormation rule matching.


Secrets Detection

The 731-rule secrets scanner runs alongside SCA and SAST, detecting credentials and tokens across all source files and configuration formats.

  • Cloud provider keys: AWS access keys, GCP service account JSON, Azure connection strings
  • SCM tokens: GitHub PAT (github_pat_*, ghp_*), GitLab tokens, Bitbucket app passwords
  • API keys: OpenAI (sk-*), Anthropic (sk-ant-*), Stripe, SendGrid, Mailgun, Twilio
  • Cryptographic material: PEM private keys (RSA / EC / OpenSSH / PGP), JWT bearer tokens
  • Database connection strings: PostgreSQL, MySQL, MongoDB, Redis URIs with embedded credentials
  • Inline suppression: same comment markers as SAST
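For illustration, a handful of simplified detector patterns in the same spirit — these are stand-ins, not the 731 shipped rules:

```javascript
// Simplified, illustrative secret patterns. Real rule sets are stricter
// and pair regexes with context and entropy heuristics.
const SECRET_PATTERNS = [
  { id: 'github-pat', re: /\bghp_[A-Za-z0-9]{36}\b/ },
  { id: 'github-fine-grained', re: /\bgithub_pat_[A-Za-z0-9_]{22,}\b/ },
  { id: 'aws-access-key-id', re: /\bAKIA[0-9A-Z]{16}\b/ },
  { id: 'anthropic-key', re: /\bsk-ant-[A-Za-z0-9-]{20,}\b/ },
  { id: 'private-key-header', re: /-----BEGIN (?:RSA |EC |OPENSSH |PGP )?PRIVATE KEY-----/ },
];

// Return the ids of every pattern that fires on a line of text.
function scanLine(line) {
  return SECRET_PATTERNS.filter((p) => p.re.test(line)).map((p) => p.id);
}
```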

Decision Engine & Signed Evidence

Every vulnerability gets a prioritized decision and SLA. Every decision — firewall block, Patch-Now triage, Accept-Risk — produces a signed JWS attestation.

Decision matrix

Decision | SLA | Triggers
Patch Now | 48h | CISA KEV listed; OR EPSS ≥ 0.85; OR EPSS ≥ 0.7 with reachable code path
Patch This Sprint | 336h (14d) | EPSS ≥ 0.3; OR Critical severity with fix available; OR public ExploitDB entry; OR reachable Critical/High
Monitor | 720h (30d) | EPSS ≥ 0.05; OR High severity; OR reachable Medium
Accept Risk | — | No active exploitation signals
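A direct transcription of the matrix into code; field names and the order in which overlapping triggers are checked are assumptions, not the shipped engine:

```javascript
// Decision matrix sketch: thresholds copied from the table above,
// evaluated from most to least urgent.
function triage({ kev = false, epss = 0, severity = 'LOW', reachable = false,
                  fixAvailable = false, exploitDb = false }) {
  if (kev || epss >= 0.85 || (epss >= 0.7 && reachable)) {
    return { decision: 'Patch Now', slaHours: 48 };
  }
  const criticalFix = severity === 'CRITICAL' && fixAvailable;
  const reachableHigh = reachable && (severity === 'CRITICAL' || severity === 'HIGH');
  if (epss >= 0.3 || criticalFix || exploitDb || reachableHigh) {
    return { decision: 'Patch This Sprint', slaHours: 336 };
  }
  if (epss >= 0.05 || severity === 'HIGH' || (reachable && severity === 'MEDIUM')) {
    return { decision: 'Monitor', slaHours: 720 };
  }
  return { decision: 'Accept Risk', slaHours: null };
}
```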

Signed JWS attestation

Each decision produces a JSON Web Signature bundle containing:

{
  "decision": "Patch Now",
  "urgency": "critical",
  "slaHours": 48,
  "rationale": [
    "Listed in CISA Known Exploited Vulnerabilities catalog",
    "EPSS score 0.912 indicates very high exploitation probability",
    "Reachable: minimist.parse() used in src/server.js, src/config.js"
  ],
  "evidence": {
    "epss": { "value": 0.912, "source": "https://api.first.org/data/v1/epss?cve=CVE-XXXX-XXXX", "fetched_at": "2026-04-28T14:30:00Z" },
    "kev":  { "catalog_version": "2026-04-27", "listed_at": "2026-03-15" },
    "exploitDb": [{ "edb_id": "51234", "url": "https://www.exploit-db.com/exploits/51234" }],
    "reachability": [{ "function": "minimist.parse", "file": "src/server.js", "line": 42 }],
    "cvss": { "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "score": 9.8 }
  },
  "policy": { "id": "policy_pci_dss_default", "version": 7 },
  "trustDelta": -28,
  "scan_id": "...", "project_id": "...", "tenant_id": "...",
  "timestamp": "2026-04-28T14:30:42Z"
}

Each bundle is signed with the dpndncY licensing keypair and can be verified offline with the public key — by auditors, downstream customers, or CI pipelines storing build artifacts.


Trust Engine & Patch Guidance

Every package gets a 0–100 trust score derived from explainable factors. The score also drives the firewall's trust-delta gating — alerts when a version's score drops vs. the last approved version (catches typosquats, takeovers, and maintainer rotations).

Factors

  • Maintainer count and historical activity
  • Release cadence (stale packages flagged; flash-flood new packages flagged)
  • Install-script presence and risk class
  • License clarity (declared vs. inferred vs. unknown)
  • Vulnerability history (count and recency)
  • Anomaly index (e.g., new package + install scripts + no maintainer history)
  • Coverage confidence (how much metadata was available to score against)

Patch guidance

For each package the trust engine emits a recommended target version with semver delta classification (patch / minor / major), the earliest non-vulnerable target, and tie-broken alternatives. This drives auto-fix PR generation.
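The semver delta classification can be sketched as follows — plain x.y.z versions assumed; the shipped engine's parsing and tie-breaking are more involved:

```javascript
// Classify the jump from the current version to the recommended target
// as a patch, minor, or major semver delta.
function semverDelta(current, target) {
  const [cMaj, cMin] = current.split('.').map(Number);
  const [tMaj, tMin] = target.split('.').map(Number);
  if (tMaj !== cMaj) return 'major';
  if (tMin !== cMin) return 'minor';
  return 'patch';
}
```

A "patch" delta is the safest auto-fix candidate; "major" deltas get the breaking-change analysis attached to the PR.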


Auto-Fix Pull Requests

Open pull requests on GitHub, GitLab, and self-hosted instances with version bumps, lockfile regeneration, and breaking-change analysis attached.

Coverage

Ecosystem | Manifest | Lockfile
npm | package.json | package-lock.json, yarn.lock, pnpm-lock.yaml
PyPI | requirements.txt | poetry.lock, Pipfile.lock
Maven | pom.xml | —
Go | go.mod | go.sum (regenerated)
Cargo | Cargo.toml | Cargo.lock
NuGet | packages.config | —
Composer | composer.json | —
RubyGems | Gemfile | Gemfile.lock

Breaking-change analysis

Pre-flight diff between current and target version surfaces removed exports, signature changes, and major version bumps. The PR description includes the breaking-change summary so reviewers see the surface area before merging.

Platform support

  • GitHub.com and self-hosted GitHub Enterprise Server
  • GitLab.com and self-hosted GitLab CE/EE
  • Bitbucket Cloud (early)

License Compliance & Obligations

Beyond allow/deny: surfaces the actual obligations triggered by each license — what your legal team needs to do, not just what's blocked.

  • SPDX-aligned license normalization
  • Obligation graph: attribution, source disclosure, copyleft scope (file / module / derivative work), patent grant, NOTICE file requirements, modifications statement
  • Conflict detection: GPL + proprietary, AGPL + SaaS, copyleft + closed-source distribution
  • License-cache for offline / air-gapped lookups
  • Pre-install firewall enforcement: block GPL contamination before it lands in the tree

Dependency Health Scoring

Per-package health score independent of CVE status — future risk indicators, not just known vulnerabilities. Surfaces low-health packages before they get a CVE.

  • Maintainer count, activity, response time
  • Release cadence and last-release recency
  • License clarity
  • Anomaly signals (e.g., sudden ownership transfer, recent install-script addition)
  • Historical vulnerability density

Notifications

Native formatting per platform — auto-detected by webhook hostname.

Platform        | Format                                 | Detection
Slack           | Block Kit with severity-coded sections | hooks.slack.com
Microsoft Teams | Adaptive Card with action buttons      | webhook.office.com / outlook.office365.com
Discord         | Rich embed with severity color         | discord.com / discordapp.com
Generic webhook | JSON payload                           | Anything else (PagerDuty, Opsgenie, custom endpoints)
Email           | SMTP with HTML + plaintext             | Configured per tenant via SMTP settings

Triggers: new findings, policy failures, firewall blocks, scan completion, license violations, trust-delta drops.
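For the generic-webhook row, a delivery payload might look like the following sketch (event names and fields are illustrative assumptions, not a documented schema):

```json
{
  "event": "firewall.block",
  "severity": "HIGH",
  "package": { "name": "example-pkg", "version": "1.2.3", "ecosystem": "npm" },
  "reason": "CISA KEV match",
  "timestamp": "2026-03-09T12:00:00Z"
}
```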


Jira & Linear Ticketing

Native API integrations for Jira (cloud and self-hosted Data Center) and Linear. Auto-create tickets from findings or firewall events; round-trip status sync back to dpndncY.

  • Per-tenant config: project key, issue type, default assignee, priority mapping
  • Bulk-create tickets from a filtered finding view
  • Ticket includes severity, evidence bundle link, remediation guidance, and the signed JWS attestation
  • Two-way sync: closing the ticket marks the finding as remediated in dpndncY

Policy Engine

Define security policies to gate CI/CD pipelines. Policies evaluate findings against thresholds, blocked rules, and EPSS minimums. A failed policy returns a non-zero exit code that fails the build.

Policy configuration

{
  "thresholds": {
    "critical": 0,     // fail if any CRITICAL findings
    "high": 3,
    "medium": null,    // null = no limit
    "low": null
  },
  "blockedRules": [
    "JS-TAINT-SQL-001",
    "PY-EXEC-001"
  ],
  "deltaOnly": true,   // only evaluate findings in changed lines (PR gate)
  "minEpss": 0.4       // only count vulns with EPSS ≥ 0.4
}

Policy evaluation

POST /api/sast/policy/evaluate
{
  "runId": "uuid",
  "policy": { ... }
}

# Response:
{
  "passed": false,
  "violations": [
    { "rule": "critical threshold", "found": 2, "limit": 0 }
  ]
}

SBOM & Export

Format             | Endpoint                      | Use case
CycloneDX 1.4 JSON | GET /api/scans/:id/sbom       | SBOM for compliance, procurement, auditors
SARIF 2.1.0        | GET /api/sast/runs/:id/sarif  | SAST findings for GitHub Code Scanning, Azure DevOps, IDE plugins
CSV                | GET /api/scans/:id/export/csv | SCA findings for reporting, spreadsheet analysis

Scan History & Trends

Each completed scan saves a snapshot. The trend engine compares consecutive snapshots to compute a risk delta: new findings, resolved findings, and change in composite risk score. Trend data powers the dashboard timeline chart.

GET /api/scans/:id/history   # List historical snapshots for a project
GET /api/scans/:id/trend     # Risk delta between last 2 snapshots
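A trend response could be shaped like this sketch (field names are assumptions; check your instance's actual output):

```json
{
  "newFindings": 3,
  "resolvedFindings": 5,
  "riskScoreDelta": -1.2
}
```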

AI Risk Detection

dpndncY flags AI/ML-specific security risks in addition to standard vulnerabilities:

  • Model loading via insecure deserialization (pickle, unsafe torch.load)
  • LLM prompt injection surface in AI framework integrations
  • Model supply chain risks: packages that download models from unverified registries at runtime
  • Training data exposure via logging, serialization, or external API calls

Authentication

Type                     | Lifetime              | Use case
Session token            | 8h (configurable)     | Browser UI. Issued on login, stored as HTTP-only cookie.
Personal API Token (PAT) | 1 year (configurable) | CI/CD pipelines, VS Code extension, API scripts. Passed as Authorization: Bearer <token>.

Creating a PAT

Via UI: Profile → API Tokens → Create Token

POST /api/tokens
Authorization: Bearer <session-token>
{ "name": "GitHub Actions", "expiresIn": "365d" }

# Save the returned token value — shown only once

API Reference

Base URL: https://sca.yourcompany.com. All endpoints require Authorization: Bearer <token> unless noted.

Scans (SCA)

POST   /api/scans                    Start SCA scan
       { "repoPath": "/path/or/git-url", "branch": "main", "label": "optional" }
GET    /api/scans                    List scans (paginated)
GET    /api/scans/:id                Scan detail + findings
GET    /api/scans/:id/sbom           CycloneDX SBOM export
GET    /api/scans/:id/export/csv     CSV export
GET    /api/scans/:id/attack-graph   Attack Path Graph
GET    /api/scans/:id/trend          Risk trend delta

SAST

POST   /api/sast/scan                Start SAST scan (async)
       { "repoPath": "/path", "branch": "feat/x", "baseBranch": "main", "deltaOnly": true }
GET    /api/sast/runs/:id            Run status + summary
GET    /api/sast/runs/:id/findings   Findings list (filterable)
GET    /api/sast/runs/:id/sarif      SARIF 2.1.0 export
POST   /api/sast/runs/:id/suppress   Suppress a finding
POST   /api/sast/policy/evaluate     Evaluate policy gate

Packages (VS Code / quick check)

POST   /api/packages/check           Check packages by name+version
       { "packages": [{ "name": "lodash", "version": "4.17.15", "ecosystem": "npm" }] }

Tokens

POST   /api/tokens       Create PAT
GET    /api/tokens       List tokens
DELETE /api/tokens/:id   Revoke token

CI/CD Integration

Use a Personal API Token to add dpndncY security gates to your pipeline. The typical pattern: scan → poll until complete → evaluate policy → fail build on violation.

GitHub Actions

.github/workflows/security.yml
name: Security Gate

on: [push, pull_request]

jobs:
  dpndncy-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: SCA Scan
        id: sca
        run: |
          RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/scans \
            -H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"repoPath":"${{ github.workspace }}","branch":"${{ github.ref_name }}"}')
          echo "scan_id=$(echo $RESULT | jq -r .id)" >> $GITHUB_OUTPUT
        env:
          DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}

      - name: SAST Scan
        id: sast
        run: |
          RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/scan \
            -H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"repoPath":"${{ github.workspace }}","branch":"${{ github.ref_name }}","baseBranch":"main","deltaOnly":true}')
          RUN_ID=$(echo $RESULT | jq -r .runId)
          # Poll until complete
          for i in $(seq 1 30); do
            STATUS=$(curl -sf $DPNDNCY_URL/api/sast/runs/$RUN_ID \
              -H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" | jq -r .status)
            [ "$STATUS" = "completed" ] && break
            sleep 10
          done
          echo "run_id=$RUN_ID" >> $GITHUB_OUTPUT
        env:
          DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}

      - name: Policy Gate
        run: |
          POLICY='{"thresholds":{"critical":0,"high":5},"deltaOnly":true}'
          RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/policy/evaluate \
            -H "Authorization: Bearer ${{ secrets.DPNDNCY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"runId\":\"${{ steps.sast.outputs.run_id }}\",\"policy\":$POLICY}")
          echo $RESULT | jq .
          echo $RESULT | jq -e '.passed == true'
        env:
          DPNDNCY_URL: ${{ secrets.DPNDNCY_URL }}

GitLab CI

.gitlab-ci.yml
security-scan:
  stage: test
  image: curlimages/curl:latest
  script:
    - |
      SCAN=$(curl -sf -X POST $DPNDNCY_URL/api/scans \
        -H "Authorization: Bearer $DPNDNCY_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"repoPath\":\"$CI_PROJECT_DIR\",\"branch\":\"$CI_COMMIT_REF_NAME\"}")
      SCAN_ID=$(echo $SCAN | grep -o '"id":"[^"]*"' | cut -d'"' -f4)

      SAST=$(curl -sf -X POST $DPNDNCY_URL/api/sast/scan \
        -H "Authorization: Bearer $DPNDNCY_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"repoPath\":\"$CI_PROJECT_DIR\",\"branch\":\"$CI_COMMIT_REF_NAME\",\"deltaOnly\":true}")
      RUN_ID=$(echo $SAST | grep -o '"runId":"[^"]*"' | cut -d'"' -f4)

      for i in $(seq 1 30); do
        STATUS=$(curl -sf $DPNDNCY_URL/api/sast/runs/$RUN_ID \
          -H "Authorization: Bearer $DPNDNCY_TOKEN" | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
        [ "$STATUS" = "completed" ] && break
        sleep 10
      done

      POLICY_RESULT=$(curl -sf -X POST $DPNDNCY_URL/api/sast/policy/evaluate \
        -H "Authorization: Bearer $DPNDNCY_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"runId\":\"$RUN_ID\",\"policy\":{\"thresholds\":{\"critical\":0},\"deltaOnly\":true}}")
      echo $POLICY_RESULT | grep -q '"passed":true' || { echo "Security policy failed"; exit 1; }
  variables:
    DPNDNCY_URL: https://sca.yourcompany.com

Add DPNDNCY_TOKEN and DPNDNCY_URL as CI secrets

In GitHub: Repository → Settings → Secrets → Actions. In GitLab: Settings → CI/CD → Variables. Mark them as protected and masked.


CLI Tool — Overview & Installation

The dpndncY CLI is a single standalone binary — no Node.js or runtime required on the machine running it. Download, configure once with your server URL and API token, then scan any local path, Git repo, zip archive, or container image from the terminal.

🪟
Windows
dpndncy-win.exe — runs on Windows 10/11 and Server 2019+. Works in CMD, PowerShell, and Windows Terminal.
🐧
Linux
dpndncy-linux — static x64 binary. Works on Ubuntu, Debian, RHEL, Alpine, and any 64-bit Linux distro.
🍎
macOS
dpndncy-mac — x64 binary for macOS 12+. Works in Terminal and CI agents on macOS runners.

Windows setup

# 1. Download from GitHub Releases
#    https://github.com/dpndncy/cli/releases/latest

# 2. (Optional) Add to PATH so 'dpndncy' works from any directory
#    Copy dpndncy-win.exe to C:\Windows\System32\dpndncy.exe
#    or add the folder to your PATH environment variable

# 3. Configure your server
dpndncy login --server https://sca.yourcompany.com --token dpat_your_token_here

# 4. Verify connection
dpndncy status

Linux / macOS setup

# Download (Linux example)
curl -L https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux -o dpndncy
chmod +x dpndncy
sudo mv dpndncy /usr/local/bin/

# Configure
dpndncy login --server https://sca.yourcompany.com --token dpat_your_token_here

# Verify
dpndncy status

Credentials file

dpndncy login saves your server URL and token to ~/.dpndncy/config.json so you don't need to pass them on every scan. You can override them per-command with --server and --token.
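A sketch of the stored file (key names are assumptions; inspect the file your login created for the exact schema):

```json
{
  "server": "https://sca.yourcompany.com",
  "token": "dpat_xxxxxxxxxxxxxxxx"
}
```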

Scan Command Reference

The main command. Run a security scan against a local path, repository, archive, or container image — all engines optional, all combinable.

dpndncy scan [path] [flags]

Scan engines

Flag           | Engine       | What it does
--sca          | SCA          | Dependency vulnerability scan — OSV, NVD, GHSA, CISA KEV, EPSS. Default if no engine flag is given.
--sast         | SAST         | Static code analysis across 9 languages with 300+ rules, taint tracking and sink detection.
--secrets      | Secrets      | IaC and config file secrets scan — API keys, tokens, passwords in dotfiles, YAML, etc.
--ai-risk      | AI Risk      | AI-assisted code attribution and context profiling. Requires --sca.
--attack-paths | Attack Paths | Graph-based exploit path from vulnerable dependency to code sink. Requires --sca --sast.
--all          | All engines  | Enable SCA + SAST + Secrets + AI Risk.

Scan targets

Flag / Arg                    | Target type
dpndncy scan .                | Scan current working directory (local path)
dpndncy scan /path/to/project | Scan a specific local directory
--zip <file>                  | Upload a zip, jar, war, or tar archive
--repo <url>                  | Scan a GitHub or GitLab repository by URL (cloned server-side)
--image <ref>                 | Scan a container image — registry ref or local tarball

Output options

Flag            | Behaviour
--json          | Print raw JSON results to stdout — suitable for scripting and parsing
--ci            | CI mode: minimal output, exit code 1 on policy fail, exit code 2 on error
--output <file> | Write JSON results to a file instead of (or in addition to) stdout

Exit codes

Code | Meaning
0    | Scan complete — policy passed (or no policy configured)
1    | Policy FAIL — vulnerabilities exceeded defined thresholds
2    | Scan error — connection failure, bad token, or scan engine error
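In a longer script, the three exit codes can be branched on explicitly. A minimal sketch (the classify_exit helper is ours, not part of the CLI):

```shell
# Map the documented dpndncy exit codes to a verdict string.
classify_exit() {
  case "$1" in
    0) echo "passed" ;;
    1) echo "policy-fail" ;;
    2) echo "error" ;;
    *) echo "unknown" ;;
  esac
}

# Typical usage (dpndncy binary assumed on PATH):
#   dpndncy scan --ci --sca . ; verdict=$(classify_exit $?)
```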

Examples

# SCA-only scan on current directory (default)
dpndncy scan .

# Full scan — all engines
dpndncy scan --all /path/to/project

# SCA + SAST + Secrets
dpndncy scan --sca --sast --secrets .

# Scan a zip archive
dpndncy scan --zip build/app.jar

# Scan a GitHub repo
dpndncy scan --repo https://github.com/org/repo --sca --sast

# Scan a container image from registry
dpndncy scan --image nginx:latest

# CI mode — fails build on policy violation
dpndncy scan --ci --sca . && echo "Build OK"

# Output JSON to file
dpndncy scan --sca --json --output results.json .

Login & Status Commands

dpndncy login

Save your server URL and API token. Stored in ~/.dpndncy/config.json. Only needs to be run once per machine.

dpndncy login --server https://sca.yourcompany.com --token dpat_xxxxxxxxxxxxxxxx

Generate your token in the dpndncY web UI under Profile → API Tokens. Token format is dpat_ followed by a random string.

dpndncy status

Checks connectivity to your configured server and prints server version, auth status, and queue depth.

dpndncy status

# Or override the server for a one-off check
dpndncy status --server https://sca.yourcompany.com --token dpat_...

CI/CD with the CLI

The --ci flag enables minimal output mode and makes the CLI return exit code 1 on policy violations — perfect for pipeline gates. Store your server URL and token as CI secrets, then download the CLI binary in your pipeline.

Recommended: use the CLI in CI/CD

The CLI handles polling, retry on connection errors, and progress reporting automatically. It's simpler and more reliable than hand-rolling curl polling loops.

GitHub Actions — using the CLI

.github/workflows/security.yml
name: Security Gate

on:
  push:
    branches: [main, develop]
  pull_request:

jobs:
  dpndncy-scan:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Download dpndncY CLI
        run: |
          curl -sSL https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux \
            -o /usr/local/bin/dpndncy
          chmod +x /usr/local/bin/dpndncy

      - name: Configure dpndncY
        run: dpndncy login --server ${{ secrets.DPNDNCY_URL }} --token ${{ secrets.DPNDNCY_TOKEN }}

      - name: Run security scan
        run: dpndncy scan --ci --sca --sast .
        # Exit 0 = passed, Exit 1 = policy fail (fails the build), Exit 2 = error

GitLab CI — using the CLI

.gitlab-ci.yml
security-scan:
  stage: test
  image: ubuntu:22.04
  before_script:
    - apt-get update -q && apt-get install -yq curl
    - curl -sSL https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux -o /usr/local/bin/dpndncy
    - chmod +x /usr/local/bin/dpndncy
    - dpndncy login --server $DPNDNCY_URL --token $DPNDNCY_TOKEN
  script:
    - dpndncy scan --ci --sca --sast $CI_PROJECT_DIR
  variables:
    DPNDNCY_URL: https://sca.yourcompany.com

Jenkins Pipeline — using the CLI

Jenkinsfile
pipeline {
  agent any
  environment {
    DPNDNCY_URL   = credentials('dpndncy-url')
    DPNDNCY_TOKEN = credentials('dpndncy-token')
  }
  stages {
    stage('Security Scan') {
      steps {
        sh '''
          curl -sSL https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux \
            -o /tmp/dpndncy && chmod +x /tmp/dpndncy
          /tmp/dpndncy login --server $DPNDNCY_URL --token $DPNDNCY_TOKEN
          /tmp/dpndncy scan --ci --sca --sast .
        '''
      }
    }
  }
}

Windows CI (PowerShell / Azure DevOps)

azure-pipelines.yml
- task: PowerShell@2
  displayName: 'dpndncY Security Scan'
  inputs:
    targetType: inline
    script: |
      $url = "https://github.com/dpndncy/cli/releases/latest/download/dpndncy-win.exe"
      Invoke-WebRequest -Uri $url -OutFile "dpndncy.exe"
      .\dpndncy.exe login --server $env:DPNDNCY_URL --token $env:DPNDNCY_TOKEN
      .\dpndncy.exe scan --ci --sca .
  env:
    DPNDNCY_URL: $(dpndncyUrl)
    DPNDNCY_TOKEN: $(dpndncyToken)

Caching the CLI binary

For faster pipelines, cache the CLI binary using your CI platform's cache action (GitHub Actions actions/cache, GitLab cache, etc.) keyed on the CLI version number. The binary is ~15 MB compressed and does not change between runs.
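On GitHub Actions this can be sketched as below; the actions/cache step is standard, but the v2.3.1 in the cache key is an invented version pin, so key it to the CLI version you actually deploy:

```yaml
- name: Cache dpndncY CLI
  id: cli-cache
  uses: actions/cache@v4
  with:
    path: /usr/local/bin/dpndncy
    key: dpndncy-cli-v2.3.1   # illustrative version pin

- name: Download dpndncY CLI
  if: steps.cli-cache.outputs.cache-hit != 'true'
  run: |
    curl -sSL https://github.com/dpndncy/cli/releases/latest/download/dpndncy-linux \
      -o /usr/local/bin/dpndncy
    chmod +x /usr/local/bin/dpndncy
```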


GitHub Integration

Connect dpndncY to GitHub to browse repositories and open automated remediation pull requests for vulnerable dependencies.

Setup

  1. Create a GitHub Personal Access Token with repo scope (or a fine-grained token with read/write on contents and pull requests)
  2. Set GITHUB_TOKEN in your configuration and restart the service
  3. Verify the connection: Settings → Integrations → GitHub

Remediation PRs

From any scan result, select affected packages and click Open Remediation PR. dpndncY creates a branch, bumps the vulnerable dependency to the patched version in the manifest and lock file, and opens a PR with full CVE context in the description.

GitLab Integration

Same capabilities as GitHub: repository browsing and automated Merge Requests for vulnerability remediation.

Setup

  1. Create a GitLab Personal Access Token with api scope
  2. Set GITLAB_TOKEN (and GITLAB_URL for self-hosted instances) in configuration
  3. Restart the service

VS Code Extension

The dpndncY VS Code extension shows vulnerability data inline in your manifest files. Vulnerable packages are underlined with severity indicators — hover for CVE detail, CVSS score, and recommended fix version.

Installation

  1. Download dpndncy-security-*.vsix from your dpndncY instance: Settings → VS Code Extension
  2. In VS Code: Extensions → ⋯ → Install from VSIX…

Settings

Setting             | Description
dpndncy.serverUrl   | URL of your dpndncY instance, e.g. https://sca.yourcompany.com
dpndncy.apiToken    | Personal API Token (generate from Profile → API Tokens)
dpndncy.minSeverity | Minimum severity to show: LOW / MEDIUM / HIGH / CRITICAL
dpndncy.autoScan    | Scan on file save (default: false)

Notifications

Channel        | Configuration
Slack          | Set SLACK_WEBHOOK_URL to a Slack Incoming Webhook URL. Notifications sent on scan completion and policy failure.
Discord        | Set DISCORD_WEBHOOK_URL to a Discord webhook URL.
Email          | Configure SMTP settings. Emails sent for scan completion, policy failures, and new CRITICAL vulnerabilities.
Custom webhook | POST /api/webhooks — register any HTTP endpoint to receive JSON payloads for scan events. Supports HMAC request signing.
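If HMAC signing is enabled on a custom webhook, the receiver recomputes the signature over the raw body and compares it to the signature header. A sketch with openssl (the header name, secret format, and exact canonicalization are assumptions; confirm them against your instance's webhook settings):

```shell
SECRET="whsec_example"              # shared secret from webhook registration (assumed format)
BODY='{"event":"scan.completed"}'   # raw request body, byte-for-byte as received

# HMAC-SHA256 over the body, hex-encoded; compare this to the incoming signature header.
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')
echo "$SIG"
```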

SSO / OIDC

dpndncY supports OIDC-based SSO with Okta, Azure AD, Auth0, and any OIDC-compliant identity provider. Users are provisioned automatically on first login. Role assignment is controlled via OIDC group claims.

OIDC_ISSUER=https://yourorg.okta.com/oauth2/default
OIDC_CLIENT_ID=0oa1b2c3d4e5
OIDC_CLIENT_SECRET=your-client-secret
OIDC_CALLBACK_URL=https://sca.yourcompany.com/auth/oidc/callback

When configured, a Sign in with SSO button appears on the login page. Password-based login for local accounts can be disabled from Settings → Authentication.


User Management

Role   | Capabilities
Admin  | Full access: manage users, integrations, settings, all scans, tokens, audit log
Viewer | Read-only: view scan results, findings, SBOM exports. Can call /api/packages/check. Cannot start scans or change settings.

Manage users at Settings → Users or via API:

POST /api/admin/users
Authorization: Bearer <admin-token>
{ "email": "engineer@yourcompany.com", "role": "viewer" }

API Tokens (PAT)

  • Tokens are scoped to the permissions of the creating user
  • The token value is shown once only — store it immediately in your secrets manager
  • Create separate tokens per integration (CI, VS Code, monitoring) for independent revocation
  • Revocation is instant — use Profile → API Tokens or DELETE /api/tokens/:id
  • Audit token usage from Settings → Audit Log

Backup & Restore

All persistent state is in the SQLite database and the scan snapshot directory. Both live on the data volume.

Docker backup

# Backup the data volume to a tar archive
docker run --rm \
  -v dpndncy_data:/data \
  -v $(pwd)/backups:/backups \
  alpine tar czf /backups/dpndncy-$(date +%Y%m%d).tar.gz /data

# Restore
docker run --rm \
  -v dpndncy_data:/data \
  -v $(pwd)/backups:/backups \
  alpine tar xzf /backups/dpndncy-20260309.tar.gz -C /

Kubernetes backup

Backup the PVC using your cluster's volume snapshot mechanism (e.g., Velero, CSI snapshots):

velero backup create dpndncy-backup \
  --include-namespaces security \
  --wait

Windows backup

Stop the service, copy C:\ProgramData\dpndncY\ to a backup location, then restart:

net stop dpndncy
robocopy "C:\ProgramData\dpndncY" "D:\Backups\dpndncy-%date%" /E /COPYALL
net start dpndncy

Upgrades

dpndncY applies database migrations automatically on startup. Always back up your data before upgrading.

Docker

# Pull the new image and recreate the container (data volume is preserved)
docker compose pull
docker compose up -d

Kubernetes

helm repo update
helm upgrade dpndncy dpndncy/dpndncy-platform \
  --namespace security \
  -f values-prod.yaml

Windows

Run the new version's .exe installer over the existing installation. The installer handles the service stop/start and data migration automatically.

Before upgrading

Read the release notes for the new version. Major versions may include breaking API or configuration changes. Back up your data volume before running any upgrade.

Troubleshooting

Container won't start

  • Check that JWT_SECRET is set and non-empty
  • Verify the data volume is mounted and writable by the container process
  • Check logs: docker compose logs dpndncy

Scans return no findings

  • Verify the target path is mounted into the container and accessible
  • Check that a supported manifest file exists in the target directory
  • Ensure outbound HTTPS to api.osv.dev and services.nvd.nist.gov is allowed by your firewall/proxy

SAST scan times out

  • Increase SAST_MAX_RUNTIME_SEC (e.g. 600 for large monorepos)
  • Use deltaOnly: true to limit analysis to changed files
  • Ensure the container has sufficient CPU — SAST is CPU-bound

SSO / OIDC login fails

  • Verify OIDC_CALLBACK_URL matches exactly what's registered in your IdP (including trailing slash if any)
  • Check that the dpndncY instance is reachable at the callback URL from the browser, not just from the server
  • Review the server log for the OIDC error response detail

Windows Service won't start

  • Check the Windows Event Viewer: Application → dpndncY
  • Verify the service account has read/write access to C:\ProgramData\dpndncY
  • Check that the configured port is not in use by another process: netstat -ano | findstr :3000

Viewing logs

# Docker
docker compose logs -f dpndncy

# Kubernetes
kubectl logs -n security -l app=dpndncy -f

# Windows
Get-EventLog -LogName Application -Source dpndncY -Newest 100

Enterprise support

Licensed customers have access to priority support. Contact support@dpndncy.dev with your instance ID (visible in Settings → About) and log output.