feat: add middleware implementation and docs
Commit 88c8c951b1

.gitignore (vendored, new file, 1 line)
@@ -0,0 +1 @@
/node_modules/

AGENTS.md (new file, 151 lines)
@@ -0,0 +1,151 @@
# AGENTS

## Mission & Audience

- This document lives at the root so every agentic helper knows how to build, run, and reason about the middleware.
- Refer back to `docs/workstation_plan.md` for the architectural story, expected flows, and the canonical payload contract before touching new features.
- Preserve the operational stability that the SQLite queue + delivery worker already provides; avoid accidental schema drift or config leaks.
- Tailor every change to the Node 20+ CommonJS ecosystem and the SQLite-backed persistence layer this repo already embraces.

## Command Reference

### Install & Bootstrapping

- `npm install` populates `node_modules` (no lockfile generation beyond the committed `package-lock.json`).
- `npm start` is the go-to run command; it migrates the database, primes the instrument cache, spins up connectors, and starts the delivery worker plus health/metrics services.
- `npm run migrate` runs `middleware/src/storage/migrate.js` on demand; use it to apply schema migrations in new environments or CI jobs.

### Maintenance & Database Care

- `npm run maintenance -- backup` copies `middleware/data/workstation.sqlite` to `workstation.sqlite.bak-<timestamp>`; leave the backup file in place and never commit or delete it.
- `npm run maintenance -- vacuum` runs SQLite's `VACUUM` via `middleware/src/scripts/maintenance.js` and logs success/failure to stdout/stderr.
- `npm run maintenance -- prune --days=<n>` deletes `delivery_log` entries older than `<n>` days; the default is 30 if `--days` is omitted. Typical invocations appear below.
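
A few representative invocations (the `--days` value here is only an example):

```text
npm run maintenance -- backup
npm run maintenance -- vacuum
npm run maintenance -- prune --days=14
```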

### Testing & Single-Test Command

- `npm test` executes `node middleware/test/parsers.test.js` and serves as the sole smoke check until a richer test harness exists.
- To rerun the single parser suite manually, run `node middleware/test/parsers.test.js` directly; it logs success via `console.log` and exits non-zero on failure.

## Environment & Secrets

- Node 20+ is assumed because the code uses optional chaining, `String.raw`, and other modern primitives; keep the same runtime for development and CI.
- All ports, DB paths, and CLQMS credentials are wired through `middleware/config/default.js` and its environment-variable overrides (e.g., `HTTP_JSON_PORT`, `CLQMS_TOKEN`, `WORKER_BATCH_SIZE`).
- Treat `CLQMS_TOKEN`, database files, and other secrets as environment-provided values; never embed them in checked-in files.
- `middleware/data/workstation.sqlite` is the runtime database. Don't delete or reinitialize it from the repository tree unless it is part of an explicit migration/backup operation.
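
For example, a staging run might override ports and credentials at launch (the values are placeholders):

```text
HTTP_JSON_PORT=8081 CLQMS_URL=https://clqms.example/api/results CLQMS_TOKEN="$STAGING_TOKEN" npm start
```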

## Observability Endpoints

- `/health` returns connector statuses plus pending/retrying/dead-letter counts from `middleware/src/routes/health.js`.
- `/health/ready` pings the SQLite queue; any failure there should log an error and respond with `503` per the existing route logic.
- `/metrics` exposes Prometheus-style gauges/counters that read straight from `queue/sqliteQueue`; keep the plaintext format exactly as defined so Prometheus scrapers don't break.
- Health and metrics routers are mounted in `middleware/src/index.js` on the port declared in the config, so any addition should remain consistent with Express middleware ordering.

## Delivery Runbook & Retry Behavior

- Backoff: `30s -> 2m -> 10m -> 30m -> 2h -> 6h`, max 10 attempts, as defined in `config.retries.schedule`. The worker calls `buildNextAttempt` in `deliveryWorker.js` to honor this array.
- Retry transient failures (timeouts, DNS/connection errors, HTTP 5xx); skip HTTP 400/422 and validation errors and ship those payloads immediately to `dead_letter` with the response body.
- After max attempts, move the canonical payload to `dead_letter` with the final error message so postmortem tooling can surface the failure.
- `queue.recordDeliveryAttempt` is called for every outbound delivery; keep latency, status, and response-code logging aligned with this helper.
- Duplicate detection relies on `utils/hash.dedupeKey`; keep `results` sorted and hashed consistently so deduplication stays stable (see the sketch after this list).
- `deliveryWorker` marks `locked_at`/`locked_by` using `queue.claimPending` and always releases them via `queue.markOutboxStatus` to avoid worker starvation.
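
`middleware/src/utils/hash.js` is not reproduced in this commit, so the following is only a minimal sketch of the shape such a helper can take, assuming SHA-256 over the fields named in `docs/workstation_plan.md`:

```js
const crypto = require('crypto');

// Sketch, not the repo's implementation: sort results by test_code first so
// array order can never change the hash, then digest the identifying fields.
function dedupeKey(canonical) {
  const lines = [...canonical.results]
    .sort((a, b) => a.test_code.localeCompare(b.test_code))
    .map((r) => `${canonical.instrument_id}|${canonical.sample_id}|${r.test_code}|${canonical.result_time}`);
  return crypto.createHash('sha256').update(lines.join('\n')).digest('hex');
}
```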

## Instrument Configuration Cache

- Instrument configuration is cached in `instrumentConfig/service.js`; reloads happen on init and via `setInterval`, so mutate the cache through `service.upsert` rather than touching the store directly.
- `service.reload` parses JSON in the `config` column, logs parsing failures with `logger.warn`, and only keeps rows that parse successfully.
- Service helpers expose `list`, `get`, and `byConnector` so connectors can fetch the subset they care about without iterating raw rows.
- Store interactions use `middleware/src/storage/instrumentConfigStore.js`, which leverages `DatabaseClient` and parameterized `ON CONFLICT` upserts; follow that pattern when extending tables.
- `instrumentService.init` must run before connectors start so `processMessage` can enforce instrument-enabled checks and connector matching.
- Always drop payloads that have no enabled config or whose connector doesn't match, and mark the raw row as `dropped` so operators can trace why a message was ignored.
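
A typical interaction with the cache, sketched from the helpers listed above (the `config` payload is hypothetical):

```js
const instrumentService = require('./instrumentConfig/service'); // path as seen from middleware/src

async function provision() {
  // Register (or update) an instrument, then read back the cached entries.
  await instrumentService.upsert({
    instrument_id: 'SYSMEX_XN1000',
    connector: 'hl7-tcp',
    enabled: true,
    config: { timezone: 'UTC' } // illustrative config blob
  });
  return instrumentService.byConnector('hl7-tcp'); // enabled entries only
}
```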

## Metrics & Logging Enhancements

- `metrics.js` builds human-readable Prometheus strings via `formatMetric`; keep that helper intact when adding new metrics so type/help annotations stay formatted correctly.
- The metrics route reports pending, retrying, dead letters, delivery attempts, last success timestamp, and average latency; add new stats only when there is a clear operational need.
- Use the `queue` helpers (`pendingCount`, `retryingCount`, `deadLetterCount`, `getLastSuccessTimestamp`, `getAverageLatency`, `getDeliveryAttempts`) rather than running fresh queries in routes.
- Always set the response content type to `text/plain; version=0.0.4; charset=utf-8` before returning metrics so Prometheus scrapers accept the payload.
- Health logs should cite both connectors and queue metrics so failure contexts are actionable and correlate with the operational dashboards referenced in `docs/workstation_plan.md`.
- Mask sensitive fields and avoid dumping raw payloads in logs; connectors and parsers add context objects to errors rather than full payload dumps.
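
The exposition format the scrapers expect looks like this (the metric names here are illustrative, not copied from `metrics.js`):

```text
# HELP outbox_pending Deliveries waiting in outbox_result
# TYPE outbox_pending gauge
outbox_pending 3
# HELP delivery_attempts_total Rows recorded in delivery_log
# TYPE delivery_attempts_total counter
delivery_attempts_total 128
```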

## Maintenance Checklist

- `middleware/src/scripts/maintenance.js` supports the commands `backup`, `vacuum`, and `prune --days=<n>` (default 30); call these from CI or ops scripts when the backlog grows.
- `backup` copies the SQLite file; run it before migrations or schema updates so you can roll back quickly.
- `vacuum` rebuilds the database file to reclaim space; wrap it in maintenance windows because it briefly locks the database.
- `prune` deletes old rows from `delivery_log`; use the same threshold as `docs/workstation_plan.md` (default 30 days) unless stakeholders approve a different retention.
- `maintenance` logging uses `console.log`/`console.error` because the script runs outside the Express app; keep those calls simple and exit with non-zero codes on failure to alert CI.
- Document every manual maintenance action in the repository README or a runbook so second-tier operators know what happened.

## Data & Schema Source of Truth

- All schema statements live in `middleware/db/migrations/00*_*.sql`; the bootstrapper iterates over these files alphabetically via `fs.readdirSync` and `db.exec`, so keep new migrations in that folder and add them with increasing numeric prefixes.
- Table definitions include `inbox_raw`, `outbox_result`, `delivery_log`, `instrument_config`, and `dead_letter`. An additional migration adds `locked_at` and `locked_by` to `outbox_result`.
- `middleware/src/storage/migrate.js` applies every `.sql` in the migrations folder unconditionally on each run, so write statements idempotently (`CREATE TABLE IF NOT EXISTS`, `CREATE INDEX IF NOT EXISTS`). Avoid irreversible SQL (DROP, ALTER without fallback) unless you also add compensating migrations.
- `DatabaseClient` in `middleware/src/storage/db.js` wraps sqlite3 callbacks in promises; reuse its `run`, `get`, and `all` helpers to keep SQL parameterization consistent and to centralize `busyTimeout` configuration.
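
A hypothetical next migration, following the numeric-prefix and idempotency conventions (the filename and index are examples, not part of this commit):

```sql
-- middleware/db/migrations/003_delivery_log_index.sql
CREATE INDEX IF NOT EXISTS idx_delivery_log_outbox_id
  ON delivery_log (outbox_id);
```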

## Code Style Guidelines

### Modules, Imports, and Exports

- Prefer CommonJS `const ... = require(...)` at the top of each module; grouping local `require`s by directory depth (config, utils, domain) keeps files predictable.
- Export objects/functions via `module.exports = { ... }` or `module.exports = <function>` depending on whether multiple helpers are exported.
- When a file exposes a factory (connectors, queue), return named methods (`start`, `stop`, `onMessage`, `health`) to keep the bootstrapper happy.

### Formatting & Layout

- Use two spaces for indentation and include semicolons at the end of statements; this matches existing files such as `middleware/src/utils/logger.js` and `index.js`.
- Keep line length reasonable (~100 characters) and build long strings with template literals (see the metric formatters) rather than concatenating with `+`.
- Prefer single quotes for strings unless interpolation or escaping makes backticks clearer.
- Keep helper functions (splitters, builders) at the top of parser modules, followed by the main exported parse function.

### Naming Conventions

- Stick to camelCase for functions, methods, and variables (`processMessage`, `buildNextAttempt`, `messageHandler`).
- Use descriptive object properties that mirror domain terms (`instrument_id`, `result_time`, `connector`, `status`).
- Configuration constants such as retry schedules stay grouped inside `config/default.js`; follow the camelCase property naming used there (e.g., `config.retries.schedule`) rather than introducing new naming schemes.

### Async Flow & Error Handling

- Embrace `async/await` everywhere; existing code rarely uses raw promises (except for wrappers like `new Promise((resolve) => ...)`).
- Wrap I/O boundaries in `try/catch` blocks and log failures with structured data via `logger.error({ err: err.message }, '...')` so Pino hooks can parse them.
- When rethrowing an error, ensure the calling context knows whether the failure is fatal (e.g., `processMessage` rethrows after queue logging).
- For connectors, propagate errors through `onError` hooks so the bootstrapper can log them consistently.

### Logging & Diagnostics

- Always prefer `middleware/src/utils/logger.js` over `console.log`/`console.error` inside core services; the exception is low-level scripts like `maintenance.js` and migration runners.
- Use structured objects for context (`{ err: err.message, connector: connector.name() }`), especially around delivery failures and config reloads.
- Log positive states (started listening, health server ready) along with port numbers so the runtime state can be traced during deployment.

### Validation & Canonical Payloads

- Use `zod` for inbound schema checks; validators already live in `middleware/src/routes/instrumentConfig.js` and `middleware/src/normalizers/index.js`.
- Always normalize parser output via `normalize(parsed)` before queue insertion to guarantee `instrument_id`, `sample_id`, `result_time`, and `results` conform to expectations.
- If `normalize` throws, the caller logs the failure and marks the `inbox_raw` row as `failed` before propagating the error, so no partial writes reach the outbox.

### Database & Queue Best Practices

- Use `DatabaseClient` for all SQL interactions; it centralizes `busyTimeout` and promise conversion and prevents sqlite3 callback spaghetti.
- Parameterize every statement with `?` placeholders (see `queue/sqliteQueue.js` and `instrumentConfigStore.js`) to avoid SQL injection hazards.
- Always mark `inbox_raw` rows as `processed`, `failed`, or `dropped` after parsing to keep operators aware of what happened.
- When marking `outbox_result` statuses, clear `locked_at`/`locked_by` and update `attempts`/`next_attempt_at` in one statement so watchers can rely on atomic semantics; a sketch follows.
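
A minimal sketch of that query style, using the `run` helper described above (the function name and arguments are illustrative):

```js
// Status change, retry bookkeeping, and lock release in a single statement,
// so no observer can ever see a half-updated row.
async function markProcessed(db, id, attempts) {
  await db.run(
    `UPDATE outbox_result
        SET status = ?, attempts = ?, locked_at = NULL, locked_by = NULL
      WHERE id = ?`,
    ['processed', attempts, id]
  );
}
```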

### Connectors & Pipeline Contracts

- Each connector must provide `name`, `type`, `start`, `stop`, `health`, `onMessage`, and `onError` per the current implementation; keep this contract if you add new protocols.
- Keep connector internals event-driven: emit `messageHandler(payload)` and handle `.catch(errorHandler)` to ensure downstream failures get logged.
- For TCP connectors, track connections in `Set`s so `stop()` can destroy them before closing the server.
- Do not assume payload framing beyond what the current parser needs; let the parser module handle splitting text and trimming.
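
To exercise the pipeline end to end without an instrument, post a canonical payload to the http-json connector (the port is the config default; the payload values are illustrative). A successful enqueue answers `202` with `{"status":"queued"}`:

```text
curl -X POST http://localhost:3001/messages \
  -H 'content-type: application/json' \
  -d '{"instrument_id":"SYSMEX_XN1000","sample_id":"SMP-1","result_time":"2026-03-26T10:20:00Z","results":[{"test_code":"WBC","value":"8.2"}]}'
```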

### Worker & Delivery Guidelines

- The delivery worker polls the queue in batches of `config.worker.batchSize` and records every attempt via `queue.recordDeliveryAttempt`; follow the same pattern if you introduce new failure-handling logic.
- Respect the retry schedule defined in `config.retries.schedule`; `buildNextAttempt` uses `Math.min` to cap the index, so new delays should only be appended to `config.retries.schedule`.
- Duplicate detection relies on `utils/hash.dedupeKey` (see the sketch in the runbook section above); keep `results` sorted and hashed consistently so deduplication stays stable.
- On HTTP 400/422 responses or after too many retries, move payloads to `dead_letter` and log the reason to keep operators informed.

### Testing & Coverage Expectations

- Parser tests live in `middleware/test/parsers.test.js`; they rely on `node:assert` and deliberately simple sample payloads to avoid external dependencies.
- Add new tests by mimicking that file's style: plain `assert.strictEqual` checks, no test framework dependencies, and a `console.log` success acknowledgment.
- If you enhance the test surface, keep it runnable via `npm test` so agents and CI scripts can still rely on a single command line.
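
A new test in that style would look roughly like this (the inline payload is a stand-in, not a repo fixture, and the require path assumes the file sits in `middleware/test/`):

```js
const assert = require('node:assert');
const { parse } = require('../src/parsers/httpParser');

(async () => {
  const parsed = await parse('{"instrument_id":"DEMO","sample_id":"S1"}');
  assert.strictEqual(parsed.instrument_id, 'DEMO'); // plain assert, no framework
  console.log('httpParser smoke test passed');
})();
```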

### Documentation & Storytelling

- Keep `docs/workstation_plan.md` in sync with architectural changes; it surfaces connector flows, phases, retry policies, and maintenance checklists that agents rely on.
- When adding routes/features, document the endpoint, request payload, and expected responses in either `docs/` or inline comments near the route.

## Cursor & Copilot Rules

- No `.cursor/rules/` or `.cursorrules` directories are present in this repo, so there are no Cursor-specific constraints to copy here.
- `.github/copilot-instructions.md` is absent as well, so there are no Copilot instructions to enforce or repeat.

## Final Notes for Agents

- Keep changes isolated to their area of responsibility; the middleware is intentionally minimal, so avoid introducing new bundlers/languages.
- Before opening PRs, rerun `npm run migrate` and `npm test` to verify schema/app coherence.
- Use the environment-variable overrides from `middleware/config/default.js` when running in staging/production so the same config file can stay committed.

## Additional Notes

- Never revert existing changes you did not make unless explicitly requested; assume they were made deliberately by the user.
- If there are unrelated changes in the working tree, leave them untouched and focus on the files that matter for the ticket.
- Avoid destructive git commands (`git reset --hard`, `git checkout --`) unless the user explicitly requests them.
- If documentation updates were part of your change, add them to `docs/workstation_plan.md` or explain why the doc already covers the behavior.
- When a connector or parser handles a new instrument, double-check the `instrument_config` rows to ensure the connector name matches the incoming protocol.
- The queue keeps `status`, `attempts`, `next_attempt_at`, and the `locked_*` columns in sync; always update all relevant columns in a single SQL call to avoid race conditions.
- Keep the SQL schema in sync with `middleware/db/migrations`; add new migrations rather than editing existing ones when altering tables.

docs/workstation_plan.md (new file, 154 lines)
@@ -0,0 +1,154 @@
# Workstation Middleware Plan (Node.js + SQLite)

## Goal

Build a lightweight Node.js service that:

- receives laboratory messages from instruments
- normalizes every protocol into a single JSON payload
- queues payloads in SQLite for durability
- reliably delivers results to CLQMS with retries

## Responsibilities

- **Middleware:** connector protocols (HTTP JSON, HL7 TCP, ASTM serial/TCP), parsing/normalization, schema checks, durable queue, retries, dead-letter, logging, health endpoints.
- **CLQMS:** domain validation, mapping rules, result persistence, workflow/flags/audit.

## Flow

1. Connector captures the raw message and writes it to `inbox_raw`.
2. Parser turns protocol text into a structured object.
3. Normalizer maps the object to canonical JSON.
4. Payload lands in `outbox_result` as `pending`.
5. Delivery worker sends to CLQMS and logs attempts.
6. On success mark `processed`; on failure mark `retrying`, or `dead_letter` after max retries.
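
For reference, minimal raw messages that the HL7 and ASTM parsers in this commit will accept (segment and field positions were chosen to match the parsers' indexing, not validated against the full HL7 v2 / ASTM E1394 specifications):

```text
MSH|^~\&|SYSMEX|LAB|CLQMS|LIS|20260326102000||ORU^R01|msg-123|P|2.5
OBR|1|SYSMEX_XN1000|SMP-20260326-001||||20260326102000
OBX|1|NM|WBC^Leukocytes||8.2|10^3/uL|4.0-10.0|N
```

```text
H|\^&||SYSMEX_XN1000
O|1|SMP-20260326-001
R|1|^^^WBC|8.2|10^3/uL||||||N
```

Both normalize to the canonical payload shown later in this document; the ASTM frame omits a timestamp field, so `result_time` falls back to the current time.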

## Tech Stack (JS only)

- Node.js 20+
- SQLite via `sqlite3`
- `zod` for validation
- `pino` for logs
- `undici` (or `axios`) for HTTP delivery

## Suggested Layout

```text
middleware/
  src/
    connectors/
    parsers/
    normalizers/
    pipeline/
    queue/
    client/
    storage/
    routes/
    utils/
    index.js
  db/migrations/
  config/
  logs/
```

## Connector Contract

Each connector exposes:

```js
{
  name: () => string,
  type: () => "hl7-tcp" | "astm-serial" | "http-json",
  start: async () => {},
  stop: async () => {},
  health: () => ({ status: "up" | "down" | "degraded" }),
  onMessage: async (msg) => {},
  onError: (handler) => {}
}
```

## Canonical Payload

Required fields: `instrument_id`, `sample_id`, `result_time`, `results[]` (each with `test_code` and `value`).

Optional: `unit`, `flag`, `patient_id`, `operator_id`, `meta`.

Example:

```json
{
  "instrument_id": "SYSMEX_XN1000",
  "sample_id": "SMP-20260326-001",
  "result_time": "2026-03-26T10:20:00Z",
  "results": [
    {
      "test_code": "WBC",
      "value": "8.2",
      "unit": "10^3/uL",
      "flag": "N"
    }
  ],
  "meta": {
    "source_protocol": "HL7",
    "message_id": "msg-123",
    "connector": "hl7-tcp"
  }
}
```

## SQLite Tables

- `inbox_raw`: raw payloads, connector, parse status.
- `outbox_result`: canonical payload, delivery status, retry metadata, `dedupe_key`.
- `delivery_log`: attempt history, latency, responses.
- `instrument_config`: connector settings, enabled flag.
- `dead_letter`: failed payloads, reason, timestamp.

## Retry + Idempotency

- Backoff: `30s -> 2m -> 10m -> 30m -> 2h -> 6h`, max 10 attempts.
- Retry transient failures (timeouts, DNS/connection errors, HTTP 5xx); skip HTTP 400/422 and validation errors.
- After max attempts, move to `dead_letter` with the payload + last error.
- `dedupe_key` = hash of `instrument_id + sample_id + test_code + result_time`; a unique index on `outbox_result.dedupe_key` ensures idempotent replay.

## Security & Observability

- CLQMS auth token stored in env, never hardcoded.
- Optional IP allowlist, strict TLS + request timeouts, masked sensitive logs.
- Health: `GET /health`, `GET /health/ready` (DB + workers).
- Metrics: pending size, retrying count, dead letters, last successful delivery.
- Alerts: backlog limits, dead-letter increases, stale success timestamp.

## Delivery Phases

1. **Phase 1 (MVP):** scaffold the Node app, add SQLite + migrations, build the `http-json` connector, parser/normalizer, outbox worker, retries, dead-letter; pilot with one instrument.
2. **Phase 2:** add HL7 TCP and ASTM connectors, a parser/normalizer per connector, and instrument-specific config.
3. **Phase 3:** richer metrics/dashboards, backup + maintenance scripts, integration/failure tests, runbook + incident checklist.

## MVP Done When

- One instrument runs end-to-end with no loss during CLQMS downtime.
- Retries, dead-letter, and duplicate protection are verified.
- Health checks and logs are available for operations.

## Phase 2 Completion Notes

- Instruments are provisioned via `instrument_config` rows (connector, enabled flag, JSON payload) and can be managed through `POST /instruments` plus the instrumentation console for quick updates; an example request follows.
- Each connector validates against configured instruments, so HL7/ASTM payloads are only accepted for known, enabled equipment.
- Deduplication is now guarded by a SHA-256 `dedupe_key`, and instrument metadata is carried through the pipeline.
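
An example `POST /instruments` body matching the zod schema in `middleware/src/routes/instrumentConfig.js` (only `instrument_id` and `connector` are required; the `config` contents are illustrative):

```json
{
  "instrument_id": "SYSMEX_XN1000",
  "connector": "hl7-tcp",
  "enabled": true,
  "config": { "timezone": "UTC" }
}
```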

## Metrics & Observability

- The health router provides `/health` (status) and `/health/ready` (DB + worker) plus per-connector metrics.
- The Prometheus-friendly `/metrics` endpoint exports pending/retrying/dead-letter counts, delivery attempts, last success timestamp, and average latency.
- Logs (pino) already mask PII by design; connectors emit structured errors and handshake timing for alerts.

## Maintenance, Runbook & Automation

- The SQLite maintenance script (`node middleware/src/scripts/maintenance.js`) supports `backup`, `vacuum`, and `prune --days=<n>` to keep the DB performant and reproducible.
- Daily/weekly checklist: run a backup before deployments, vacuum monthly, and prune `delivery_log` rows older than 30 days (configurable via the CLI).
- Incident checklist: 1) check `/health/ready`; 2) inspect `outbox_result` + `dead_letter`; 3) replay payloads with `pending` or `retrying` status; 4) rotate the CLQMS token via env + restart; 5) escalate when dead letters spike or metrics show a stale success timestamp.

## Testing & Validation

- Parser smoke tests under `middleware/test/parsers.test.js` verify HL7/ASTM canonical output and keep `normalize()` coverage intact. Run them via `npm test`.
- Future CI can run the same script plus `npm run migrate` ahead of any pull request to ensure schema/queue logic still applies.

middleware/config/default.js (new file, 32 lines)
@@ -0,0 +1,32 @@
const path = require('path');

const root = path.join(__dirname, '..');

module.exports = {
  env: process.env.NODE_ENV || 'development',
  db: {
    path: process.env.DB_PATH || path.join(root, 'data', 'workstation.sqlite'),
    busyTimeout: 5000
  },
  connectors: {
    httpJsonPort: Number(process.env.HTTP_JSON_PORT || 3001),
    hl7TcpPort: Number(process.env.HL7_TCP_PORT || 3002),
    astmTcpPort: Number(process.env.ASTM_TCP_PORT || 3003)
  },
  clqms: {
    url: process.env.CLQMS_URL || 'http://localhost:4000/api/results',
    token: process.env.CLQMS_TOKEN || '',
    timeout: Number(process.env.CLQMS_TIMEOUT || 8000)
  },
  healthPort: Number(process.env.HEALTH_PORT || 4001),
  worker: {
    pollInterval: Number(process.env.WORKER_POLL_INTERVAL || 5000),
    batchSize: Number(process.env.WORKER_BATCH_SIZE || 5),
    lockTTLSeconds: Number(process.env.WORKER_LOCK_TTL || 60),
    workerId: process.env.WORKER_ID || `worker-${process.pid}`
  },
  retries: {
    schedule: [30, 120, 600, 1800, 7200, 21600],
    maxAttempts: Number(process.env.MAX_ATTEMPTS || 10)
  }
};

middleware/data/workstation.sqlite (new file, binary; contents not shown)

middleware/db/migrations/001_init.sql (new file, 50 lines)
@@ -0,0 +1,50 @@
CREATE TABLE IF NOT EXISTS inbox_raw (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  connector TEXT NOT NULL,
  received_at TEXT NOT NULL DEFAULT (datetime('now')),
  payload TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  parse_status TEXT NULL,
  error TEXT NULL
);

CREATE TABLE IF NOT EXISTS outbox_result (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  canonical_payload TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  dedupe_key TEXT NOT NULL,
  attempts INTEGER NOT NULL DEFAULT 0,
  next_attempt_at INTEGER NOT NULL DEFAULT 0,
  last_error TEXT NULL,
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE UNIQUE INDEX IF NOT EXISTS idx_outbox_result_dedupe_key
  ON outbox_result (dedupe_key);

CREATE TABLE IF NOT EXISTS delivery_log (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  outbox_id INTEGER NOT NULL,
  attempt INTEGER NOT NULL,
  status TEXT NOT NULL,
  response_code INTEGER NULL,
  response_body TEXT NULL,
  latency_ms INTEGER NULL,
  created_at TEXT NOT NULL DEFAULT (datetime('now')),
  FOREIGN KEY(outbox_id) REFERENCES outbox_result(id)
);

CREATE TABLE IF NOT EXISTS instrument_config (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  instrument_id TEXT NOT NULL UNIQUE,
  connector TEXT NOT NULL,
  enabled INTEGER NOT NULL DEFAULT 1,
  config TEXT NULL
);

CREATE TABLE IF NOT EXISTS dead_letter (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  payload TEXT NOT NULL,
  reason TEXT NOT NULL,
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

middleware/db/migrations/002_outbox_locks.sql (new file, 2 lines)
@@ -0,0 +1,2 @@
ALTER TABLE outbox_result ADD COLUMN locked_at INTEGER NULL;
ALTER TABLE outbox_result ADD COLUMN locked_by TEXT NULL;

middleware/src/client/clqmsClient.js (new file, 29 lines)
@@ -0,0 +1,29 @@
const { request } = require('undici');
const config = require('../../config/default');

async function deliver(payload) {
  const body = JSON.stringify(payload);
  const start = Date.now();
  const headers = {
    'content-type': 'application/json'
  };
  if (config.clqms.token) {
    headers.authorization = `Bearer ${config.clqms.token}`;
  }
  const response = await request(config.clqms.url, {
    method: 'POST',
    headers,
    body,
    keepaliveTimeout: 0,
    bodyTimeout: config.clqms.timeout
  });
  const latency = Date.now() - start;
  const responseBody = await response.body.text();
  return {
    code: response.statusCode,
    body: responseBody,
    latency
  };
}

module.exports = { deliver };

middleware/src/connectors/astmSerialConnector.js (new file, 69 lines)
@@ -0,0 +1,69 @@
const net = require('net');
const config = require('../../config/default');
const logger = require('../utils/logger');

function createAstmSerialConnector() {
  let server;
  let messageHandler = async () => {};
  let errorHandler = (err) => logger.error({ err }, 'astm connector error');

  const connections = new Set();

  function attach(socket) {
    connections.add(socket);
    let buffer = '';
    socket.on('data', (chunk) => {
      buffer += chunk.toString('utf8');
      // Drain every complete line in the buffer; a single TCP chunk can carry
      // several newline-terminated frames.
      while (buffer.includes('\n')) {
        const [line, ...rest] = buffer.split('\n');
        buffer = rest.join('\n');
        if (line.trim()) {
          messageHandler(line.trim()).catch(errorHandler);
        }
      }
    });
    socket.on('error', (err) => errorHandler(err));
    socket.on('close', () => connections.delete(socket));
  }

  return {
    name: () => 'astm-serial',
    type: () => 'astm-serial',
    async start() {
      server = net.createServer(attach);
      return new Promise((resolve, reject) => {
        server.listen(config.connectors.astmTcpPort, () => {
          logger.info({ port: config.connectors.astmTcpPort }, 'astm serial connector listening');
          resolve();
        });
        server.on('error', (err) => {
          errorHandler(err);
          reject(err);
        });
      });
    },
    async stop() {
      for (const socket of connections) {
        socket.destroy();
      }
      if (!server) return;
      return new Promise((resolve, reject) => {
        server.close((err) => {
          if (err) return reject(err);
          resolve();
        });
      });
    },
    health() {
      return { status: server ? 'up' : 'down', port: config.connectors.astmTcpPort };
    },
    onMessage(handler) {
      messageHandler = handler;
    },
    onError(handler) {
      errorHandler = handler;
    }
  };
}

module.exports = { createAstmSerialConnector };

middleware/src/connectors/hl7TcpConnector.js (new file, 65 lines)
@@ -0,0 +1,65 @@
const net = require('net');
const config = require('../../config/default');
const logger = require('../utils/logger');

function createHl7TcpConnector() {
  let server;
  let messageHandler = async () => {};
  let errorHandler = (err) => logger.error({ err }, 'hl7 connector error');

  const connections = new Set();

  function handleData(socket, chunk) {
    const payload = chunk.toString('utf8').trim();
    if (!payload) return;
    messageHandler(payload).catch(errorHandler);
  }

  function attach(socket) {
    connections.add(socket);
    socket.on('data', (chunk) => handleData(socket, chunk));
    socket.on('error', (err) => errorHandler(err));
    socket.on('close', () => connections.delete(socket));
  }

  return {
    name: () => 'hl7-tcp',
    type: () => 'hl7-tcp',
    async start() {
      server = net.createServer(attach);
      return new Promise((resolve, reject) => {
        server.listen(config.connectors.hl7TcpPort, () => {
          logger.info({ port: config.connectors.hl7TcpPort }, 'hl7 tcp connector listening');
          resolve();
        });
        server.on('error', (err) => {
          errorHandler(err);
          reject(err);
        });
      });
    },
    async stop() {
      for (const socket of connections) {
        socket.destroy();
      }
      if (!server) return;
      return new Promise((resolve, reject) => {
        server.close((err) => {
          if (err) return reject(err);
          resolve();
        });
      });
    },
    health() {
      return { status: server ? 'up' : 'down', port: config.connectors.hl7TcpPort };
    },
    onMessage(handler) {
      messageHandler = handler;
    },
    onError(handler) {
      errorHandler = handler;
    }
  };
}

module.exports = { createHl7TcpConnector };

middleware/src/connectors/httpJsonConnector.js (new file, 54 lines)
@@ -0,0 +1,54 @@
const express = require('express');
const config = require('../../config/default');
const logger = require('../utils/logger');

function createHttpJsonConnector() {
  let server;
  let messageHandler = async () => {};
  let errorHandler = (err) => logger.error({ err }, 'connector error');

  const app = express();
  app.use(express.json());
  app.post('/messages', async (req, res) => {
    try {
      await messageHandler(req.body);
      res.status(202).json({ status: 'queued' });
    } catch (err) {
      errorHandler(err);
      res.status(500).json({ status: 'error', message: err.message });
    }
  });

  return {
    name: () => 'http-json',
    type: () => 'http-json',
    async start() {
      return new Promise((resolve) => {
        server = app.listen(config.connectors.httpJsonPort, () => {
          logger.info({ port: config.connectors.httpJsonPort }, 'http-json connector listening');
          resolve();
        });
      });
    },
    async stop() {
      if (!server) return;
      return new Promise((resolve, reject) => {
        server.close((err) => {
          if (err) return reject(err);
          resolve();
        });
      });
    },
    health() {
      return { status: server ? 'up' : 'down', port: config.connectors.httpJsonPort };
    },
    onMessage(handler) {
      messageHandler = handler;
    },
    onError(handler) {
      errorHandler = handler;
    }
  };
}

module.exports = { createHttpJsonConnector };

middleware/src/index.js (new file, 69 lines)
@@ -0,0 +1,69 @@
const express = require('express');
const config = require('../config/default'); // config lives at middleware/config, one level up from src/
const logger = require('./utils/logger');
const migrate = require('./storage/migrate');
const { createHttpJsonConnector } = require('./connectors/httpJsonConnector');
const { createHl7TcpConnector } = require('./connectors/hl7TcpConnector');
const { createAstmSerialConnector } = require('./connectors/astmSerialConnector');
const { processMessage } = require('./pipeline/workflow');
const { startWorker, stopWorker } = require('./pipeline/deliveryWorker');
const instrumentService = require('./instrumentConfig/service');
const { createHealthRouter } = require('./routes/health');
const { router: instrumentRouter } = require('./routes/instrumentConfig');
const metricsRouter = require('./routes/metrics');

async function bootstrap() {
  await migrate();
  await instrumentService.init();

  const connectors = [
    createHttpJsonConnector(),
    createHl7TcpConnector(),
    createAstmSerialConnector()
  ];

  connectors.forEach((connector) => {
    connector.onMessage(async (payload) => {
      try {
        await processMessage(connector.name(), payload);
      } catch (err) {
        logger.error({ err: err.message, connector: connector.name() }, 'pipeline error');
      }
    });
    connector.onError((err) => {
      logger.error({ err: err.message }, `${connector.name()} emitted error`);
    });
  });

  await Promise.all(connectors.map((connector) => connector.start()));
  await startWorker();

  const app = express();
  app.use('/health', createHealthRouter(connectors));
  app.use('/instruments', instrumentRouter);
  app.use('/metrics', metricsRouter);
  app.listen(config.healthPort, () => {
    logger.info({ port: config.healthPort }, 'health server ready');
  });

  process.on('SIGINT', async () => {
    logger.info('shutdown signal received');
    await shutdown(connectors);
    process.exit(0);
  });
  process.on('SIGTERM', async () => {
    logger.info('terminate signal received');
    await shutdown(connectors);
    process.exit(0);
  });
}

async function shutdown(connectors) {
  await stopWorker();
  await Promise.all(connectors.map((connector) => connector.stop()));
}

bootstrap().catch((err) => {
  logger.fatal({ err: err.message }, 'failed to start middleware');
  process.exit(1);
});

middleware/src/instrumentConfig/service.js (new file, 53 lines)
@@ -0,0 +1,53 @@
const store = require('../storage/instrumentConfigStore');
const logger = require('../utils/logger');

let cache = new Map();
let refreshInterval;

async function reload() {
  const rows = await store.list();
  const next = new Map();
  rows.forEach((row) => {
    try {
      const config = row.config ? JSON.parse(row.config) : {};
      next.set(row.instrument_id, {
        instrument_id: row.instrument_id,
        connector: row.connector,
        enabled: Boolean(row.enabled),
        config
      });
    } catch (err) {
      logger.warn({ instrument: row.instrument_id, err: err.message }, 'failed parsing instrument config, skipping');
    }
  });
  cache = next;
}

async function init({ refreshMs = 30_000 } = {}) {
  await reload();
  if (refreshInterval) clearInterval(refreshInterval);
  refreshInterval = setInterval(() => {
    reload().catch((err) => logger.error({ err: err.message }, 'instrument config reload failed'));
  }, refreshMs);
}

function list() {
  return Array.from(cache.values());
}

function get(instrumentId) {
  return cache.get(instrumentId) || null;
}

function byConnector(connector) {
  return list().filter((entry) => entry.connector === connector && entry.enabled);
}

async function upsert(payload) {
  const stored = await store.upsert(payload);
  await reload();
  return get(stored.instrument_id);
}

module.exports = {
  init,
  list,
  get,
  byConnector,
  upsert
};

middleware/src/normalizers/index.js (new file, 37 lines)
@@ -0,0 +1,37 @@
const { z } = require('zod');
const logger = require('../utils/logger');

const resultSchema = z.object({
  test_code: z.string().min(1),
  value: z.union([z.string(), z.number()]),
  unit: z.string().optional(),
  flag: z.string().optional(),
  meta: z.record(z.string(), z.any()).optional()
});

const canonicalSchema = z.object({
  instrument_id: z.string().min(1),
  sample_id: z.string().min(1),
  result_time: z
    .string()
    .refine((value) => !Number.isNaN(Date.parse(value)), 'invalid ISO timestamp'),
  results: z.array(resultSchema).min(1),
  unit: z.string().optional(),
  flag: z.string().optional(),
  patient_id: z.string().optional(),
  operator_id: z.string().optional(),
  meta: z.record(z.string(), z.any()).optional()
});

function normalize(payload) {
  try {
    return canonicalSchema.parse(payload);
  } catch (error) {
    // Core services log through pino, not console (see AGENTS.md logging rules).
    logger.error({ err: error.message }, 'normalization failed');
    throw error;
  }
}

module.exports = {
  normalize,
  schema: canonicalSchema
};

middleware/src/parsers/astmParser.js (new file, 55 lines)
@@ -0,0 +1,55 @@
function split(line) {
  return line.split('|');
}

function extractTestCode(raw) {
  // ASTM packs the test id into a caret-delimited Universal Test ID field;
  // take the first non-empty component.
  const candidates = (raw || '').split('^').filter(Boolean);
  return candidates[0] || raw || 'UNKNOWN';
}

function buildResult(fields) {
  return {
    test_code: extractTestCode(fields[2]),
    value: fields[3] || '0',
    unit: fields[4] || undefined,
    flag: fields[11] || fields[10] || undefined, // flag column varies by instrument
    meta: {
      channel: fields[1]
    }
  };
}

function parseAstm(message) {
  const payload = typeof message === 'string' ? message.trim() : '';
  if (!payload) throw new Error('empty ASTM payload');
  const lines = payload.split(/\r?\n/).filter(Boolean);
  const header = lines.find((line) => line.startsWith('H|')) || '';
  const order = lines.find((line) => line.startsWith('O|')) || '';
  const resultLines = lines.filter((line) => line.startsWith('R|'));

  if (!resultLines.length) {
    throw new Error('no ASTM R segments');
  }

  const headerFields = header ? split(header) : [];
  const orderFields = order ? split(order) : [];
  const instrument_id = headerFields[3] || 'astm-instrument';
  const sample_id = orderFields[2] || `${instrument_id}-sample`;
  const result_time = orderFields[13] || new Date().toISOString();

  const results = resultLines.map((line) => buildResult(split(line)));

  return {
    instrument_id: instrument_id.trim(),
    sample_id: sample_id.trim(),
    result_time,
    results,
    meta: {
      source_protocol: 'ASTM',
      connector: 'astm-serial',
      segments: resultLines.length
    }
  };
}

module.exports = { parse: parseAstm };

middleware/src/parsers/hl7Parser.js (new file, 69 lines)
@@ -0,0 +1,69 @@
function splitFields(segment) {
  return segment.split('|');
}

function formatHl7Timestamp(value) {
  // Convert an HL7 TS value (YYYYMMDDHHMMSS) to ISO-8601; fall back to "now".
  if (!value) return new Date().toISOString();
  const year = value.slice(0, 4);
  const month = value.slice(4, 6) || '01';
  const day = value.slice(6, 8) || '01';
  const hour = value.slice(8, 10) || '00';
  const minute = value.slice(10, 12) || '00';
  const second = value.slice(12, 14) || '00';
  if (!year || !month || !day) {
    return new Date().toISOString();
  }
  return `${year}-${month}-${day}T${hour}:${minute}:${second}Z`;
}

async function parseHl7(message) {
  const payload = typeof message === 'string' ? message.trim() : '';
  if (!payload) {
    throw new Error('empty HL7 payload');
  }

  const segments = payload.split(/\r?\n/).filter(Boolean);
  const msh = segments.find((line) => line.startsWith('MSH'));
  const obr = segments.find((line) => line.startsWith('OBR'));
  const obxSegments = segments.filter((line) => line.startsWith('OBX'));

  const mshFields = msh ? splitFields(msh) : [];
  const obrFields = obr ? splitFields(obr) : [];

  const instrument_id = (obrFields[2] || '').split('^')[0] || 'hl7-instrument';
  const sample_id = (obrFields[3] || obrFields[2] || '').split('^')[0] || `${instrument_id}-sample`;
  const result_time = formatHl7Timestamp(obrFields[7] || mshFields[6] || '');
  if (!obxSegments.length) {
    throw new Error('no OBX segments in HL7 payload');
  }

  const results = obxSegments.map((segment) => {
    const fields = splitFields(segment);
    return {
      test_code: (fields[3] || 'UNKNOWN').split('^')[0], // OBX-3 observation identifier
      value: fields[5] || '0', // OBX-5 observation value
      unit: fields[6] || undefined, // OBX-6 units
      flag: fields[8] || undefined, // OBX-8 abnormal flags
      meta: {
        observation_id: fields[3]
      }
    };
  });

  const canonical = {
    instrument_id: instrument_id.trim(),
    sample_id: sample_id.trim(),
    result_time,
    results,
    meta: {
      source_protocol: 'HL7',
      message_id: mshFields[9] || undefined, // MSH-10 message control ID
      connector: 'hl7-tcp',
      obx_count: obxSegments.length
    }
  };

  return canonical;
}

module.exports = { parse: parseHl7 };

middleware/src/parsers/httpParser.js (new file, 8 lines)
@@ -0,0 +1,8 @@
async function parseHttp(message) {
  if (typeof message === 'string') {
    return JSON.parse(message);
  }
  return message;
}

module.exports = { parse: parseHttp };

middleware/src/pipeline/deliveryWorker.js (new file, 140 lines)
@@ -0,0 +1,140 @@
const queue = require('../queue/sqliteQueue');
const client = require('../client/clqmsClient');
const logger = require('../utils/logger');
const config = require('../../config/default');

let running = false;
let workerPromise;

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const transientCodes = new Set(['ECONNRESET', 'ETIMEDOUT', 'ECONNREFUSED', 'EAI_AGAIN', 'ENETUNREACH']);

// Epoch seconds for the next retry; the index is capped so attempts beyond the
// schedule keep reusing the final (largest) delay.
function buildNextAttempt(attempts) {
  const schedule = config.retries.schedule;
  const index = Math.min(attempts - 1, schedule.length - 1);
  const delaySeconds = schedule[index] || schedule[schedule.length - 1] || 60;
  return Math.floor(Date.now() / 1000) + delaySeconds;
}

function isTransientError(err) {
  if (!err) return true;
  if (err.code && transientCodes.has(err.code)) return true;
  if (err.message && err.message.toLowerCase().includes('timeout')) return true;
  return false;
}

async function handleEntry(entry) {
  const payload = JSON.parse(entry.canonical_payload);
  const attemptNumber = entry.attempts + 1;
  let response;
  let attemptStatus = 'failure';

  try {
    response = await client.deliver(payload);
    attemptStatus = response.code >= 200 && response.code < 300 ? 'success' : 'failure';
    await queue.recordDeliveryAttempt({
      outboxId: entry.id,
      attempt: attemptNumber,
      status: attemptStatus,
      responseCode: response.code,
      responseBody: response.body,
      latency: response.latency
    });

    if (attemptStatus === 'success') {
      await queue.markOutboxStatus(entry.id, 'processed', {
        attempts: attemptNumber,
        lastError: null,
        nextAttemptAt: Math.floor(Date.now() / 1000)
      });
      return;
    }

    // Validation rejections are permanent: dead-letter immediately, no retry.
    if (response.code === 400 || response.code === 422) {
      await queue.markOutboxStatus(entry.id, 'dead_letter', {
        attempts: attemptNumber,
        lastError: `HTTP ${response.code}`,
        nextAttemptAt: Math.floor(Date.now() / 1000)
      });
      await queue.moveToDeadLetter(payload, `HTTP ${response.code}`);
      return;
    }

    if (attemptNumber >= config.retries.maxAttempts) {
      await queue.markOutboxStatus(entry.id, 'dead_letter', {
        attempts: attemptNumber,
        lastError: response.body,
        nextAttemptAt: Math.floor(Date.now() / 1000)
      });
      await queue.moveToDeadLetter(payload, response.body);
      return;
    }

    const nextAttemptAt = buildNextAttempt(attemptNumber);
    await queue.markOutboxStatus(entry.id, 'retrying', {
      attempts: attemptNumber,
      lastError: `HTTP ${response.code}`,
      nextAttemptAt
    });
  } catch (error) {
    await queue.recordDeliveryAttempt({
      outboxId: entry.id,
      attempt: attemptNumber,
      status: 'failure',
      responseCode: null,
      responseBody: error.message,
      latency: null
    });

    const shouldDeadLetter = attemptNumber >= config.retries.maxAttempts || !isTransientError(error);
    if (shouldDeadLetter) {
      await queue.markOutboxStatus(entry.id, 'dead_letter', {
        attempts: attemptNumber,
        lastError: error.message,
        nextAttemptAt: Math.floor(Date.now() / 1000)
      });
      await queue.moveToDeadLetter(payload, error.message);
      return;
    }
    const nextAttemptAt = buildNextAttempt(attemptNumber);
    await queue.markOutboxStatus(entry.id, 'retrying', {
      attempts: attemptNumber,
      lastError: error.message,
      nextAttemptAt
    });
  }
}

async function loop() {
  while (running) {
    try {
      const batch = await queue.claimPending(config.worker.batchSize, config.worker.workerId);
      if (!batch.length) {
        await sleep(config.worker.pollInterval);
        continue;
      }
      for (const entry of batch) {
        await handleEntry(entry);
      }
    } catch (err) {
      logger.error({ err: err.message }, 'delivery worker error');
      await sleep(config.worker.pollInterval);
    }
  }
}

async function startWorker() {
  if (running) return;
  running = true;
  workerPromise = loop();
}

async function stopWorker() {
  running = false;
  if (workerPromise) {
    await workerPromise;
  }
}

module.exports = { startWorker, stopWorker };

middleware/src/pipeline/workflow.js (new file, 52 lines)
@@ -0,0 +1,52 @@
const queue = require('../queue/sqliteQueue');
const config = require('../../config/default');
const logger = require('../utils/logger');
const { normalize } = require('../normalizers');
const { dedupeKey } = require('../utils/hash');
const instrumentService = require('../instrumentConfig/service');

const parserMap = {
  'http-json': require('../parsers/httpParser'),
  'hl7-tcp': require('../parsers/hl7Parser'),
  'astm-serial': require('../parsers/astmParser')
};

async function processMessage(connector, rawPayload) {
  const rawRecord = await queue.insertRaw(connector, rawPayload);
  const rawId = rawRecord?.lastID;
  try {
    const parser = parserMap[connector];
    if (!parser) {
      throw new Error(`no parser registered for ${connector}`);
    }
    const parsed = await parser.parse(rawPayload);
    const canonical = normalize(parsed);
    const instrumentEntry = instrumentService.get(canonical.instrument_id);
    if (!instrumentEntry || !instrumentEntry.enabled) {
      logger.warn({ instrument: canonical.instrument_id, connector }, 'no enabled instrument config, dropping payload');
      await queue.markRawParsed(rawId, 'dropped', 'instrument disabled/no config');
      return { dropped: true };
    }
    if (instrumentEntry.connector !== connector) {
      logger.warn({ instrument: canonical.instrument_id, expected: instrumentEntry.connector, actual: connector }, 'connector mismatch for instrument');
      await queue.markRawParsed(rawId, 'dropped', 'connector mismatch');
      return { dropped: true };
    }
    const dedupe = dedupeKey(canonical);
    canonical.meta = { ...(canonical.meta || {}), instrument_config: instrumentEntry.config };
    const inserted = await queue.insertOutbox(canonical, dedupe);
    await queue.markRawParsed(rawId, 'processed');
    if (inserted && inserted.duplicate) {
      logger.info({ dedupe, connector }, 'duplicate payload detected');
    }
    return { canonical, inserted, dedupe };
  } catch (error) {
    logger.error({ err: error.message, connector }, 'message processing failed');
    if (rawId) {
      await queue.markRawParsed(rawId, 'failed', error.message);
    }
    throw error;
  }
}

module.exports = { processMessage };

middleware/src/queue/sqliteQueue.js (new file, 151 lines)
@@ -0,0 +1,151 @@
|
||||
const crypto = require('crypto');
|
||||
const DatabaseClient = require('../storage/db');
|
||||
const config = require('../../config/default');
|
||||
|
||||
class SqliteQueue {
|
||||
constructor() {
|
||||
this.db = new DatabaseClient(config.db);
|
||||
}
|
||||
|
||||
_serial(payload) {
|
||||
return typeof payload === 'string' ? payload : JSON.stringify(payload);
|
||||
}
|
||||
|
||||
async insertRaw(connector, payload) {
|
||||
const content = this._serial(payload);
|
||||
return this.db.run(
|
||||
`INSERT INTO inbox_raw (connector, payload) VALUES (?, ?)`,
|
||||
[connector, content]
|
||||
);
|
||||
}
|
||||
|
||||
async markRawParsed(id, status, error) {
|
||||
await this.db.run(
|
||||
`UPDATE inbox_raw SET status = ?, parse_status = ?, error = ? WHERE id = ?`,
|
||||
[status, error ? 'error' : 'ok', error, id]
|
||||
);
|
||||
}
|
||||
|
||||
async insertOutbox(canonical, dedupeKey) {
|
||||
    const payload = this._serial(canonical);
    try {
      return await this.db.run(
        `INSERT INTO outbox_result (canonical_payload, status, dedupe_key, next_attempt_at) VALUES (?, 'pending', ?, 0)`,
        [payload, dedupeKey]
      );
    } catch (err) {
      // A UNIQUE violation on dedupe_key means this payload was already enqueued.
      if (err.message && err.message.includes('UNIQUE constraint failed')) {
        const existing = await this.db.get(`SELECT * FROM outbox_result WHERE dedupe_key = ?`, [dedupeKey]);
        return { existing, duplicate: true };
      }
      throw err;
    }
  }

  async fetchPending(batchSize) {
    const now = Math.floor(Date.now() / 1000);
    return this.db.all(
      `SELECT * FROM outbox_result WHERE status IN ('pending','retrying') AND next_attempt_at <= ? ORDER BY next_attempt_at ASC LIMIT ?`,
      [now, batchSize]
    );
  }

  async claimPending(batchSize, workerId) {
    const now = Math.floor(Date.now() / 1000);
    // Over-fetch candidates so contention with other workers can still yield a full batch.
    const candidates = await this.db.all(
      `SELECT id FROM outbox_result WHERE status IN ('pending','retrying') AND next_attempt_at <= ? ORDER BY next_attempt_at ASC LIMIT ?`,
      [now, batchSize * 2]
    );
    const locked = [];
    const stale = now - (config.worker.lockTTLSeconds || 60);
    for (const candidate of candidates) {
      // Only claim rows that are unlocked or whose lock has gone stale.
      const result = await this.db.run(
        `UPDATE outbox_result SET locked_at = ?, locked_by = ? WHERE id = ? AND (locked_at IS NULL OR locked_at <= ?)`,
        [now, workerId, candidate.id, stale]
      );
      if (result.changes === 1) {
        const entry = await this.db.get(`SELECT * FROM outbox_result WHERE id = ?`, [candidate.id]);
        locked.push(entry);
        if (locked.length >= batchSize) break;
      }
    }
    return locked;
  }

  async markOutboxStatus(id, status, { lastError = null, attempts = null, nextAttemptAt = null } = {}) {
    const fields = ['status = ?'];
    const params = [status];
    if (lastError !== null) {
      fields.push('last_error = ?');
      params.push(lastError);
    }
    if (attempts !== null) {
      fields.push('attempts = ?');
      params.push(attempts);
    }
    if (nextAttemptAt !== null) {
      fields.push('next_attempt_at = ?');
      params.push(nextAttemptAt);
    }
    // Any status transition releases the worker lock.
    fields.push('locked_at = NULL', 'locked_by = NULL');
    params.push(id);
    await this.db.run(`UPDATE outbox_result SET ${fields.join(', ')} WHERE id = ?`, params);
  }

  async recordDeliveryAttempt({ outboxId, attempt, status, responseCode, responseBody, latency }) {
    await this.db.run(
      `INSERT INTO delivery_log (outbox_id, attempt, status, response_code, response_body, latency_ms) VALUES (?, ?, ?, ?, ?, ?)`,
      [outboxId, attempt, status, responseCode, responseBody, latency]
    );
  }

  async moveToDeadLetter(payload, reason) {
    await this.db.run(
      `INSERT INTO dead_letter (payload, reason) VALUES (?, ?)`,
      [this._serial(payload), reason]
    );
  }

  async pendingCount() {
    const row = await this.db.get(`SELECT COUNT(*) as count FROM outbox_result WHERE status = 'pending'`);
    return row ? row.count : 0;
  }

  async retryingCount() {
    const row = await this.db.get(`SELECT COUNT(*) as count FROM outbox_result WHERE status = 'retrying'`);
    return row ? row.count : 0;
  }

  async deadLetterCount() {
    const row = await this.db.get(`SELECT COUNT(*) as count FROM dead_letter`);
    return row ? row.count : 0;
  }

  async getLastSuccessTimestamp() {
    const row = await this.db.get(
      `SELECT MAX(created_at) as last_success FROM delivery_log WHERE status = 'success'`
    );
    return row?.last_success || null;
  }

  async getAverageLatency() {
    const row = await this.db.get(`SELECT AVG(latency_ms) as avg_latency FROM delivery_log WHERE latency_ms IS NOT NULL`);
    return row?.avg_latency || 0;
  }

  async getDeliveryAttempts() {
    const row = await this.db.get(`SELECT COUNT(*) as total FROM delivery_log`);
    return row ? row.total : 0;
  }

  async ping() {
    await this.db.get('SELECT 1 as ok');
    return true;
  }

  async close() {
    await this.db.close();
  }
}

module.exports = new SqliteQueue();
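How the delivery worker is expected to drive this API, as a minimal sketch: `claimPending` locks a batch, and every terminal `markOutboxStatus` call releases the row lock. The `deliver` helper, the `delivered` status name, the CLQMS URL, and the backoff numbers below are illustrative assumptions, not code from this commit.

```js
// Sketch only: a worker loop over the queue API above.
const queue = require('./sqliteQueue');

// Hypothetical delivery call -- stands in for the real CLQMS client.
async function deliver(payload) {
  const res = await fetch('http://clqms.example/api/results', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(payload)
  });
  if (!res.ok) throw new Error(`CLQMS responded ${res.status}`);
  return res;
}

async function tick(workerId) {
  const batch = await queue.claimPending(10, workerId);
  for (const entry of batch) {
    const attempt = (entry.attempts || 0) + 1;
    const started = Date.now();
    try {
      const res = await deliver(JSON.parse(entry.canonical_payload));
      await queue.recordDeliveryAttempt({
        outboxId: entry.id,
        attempt,
        status: 'success',
        responseCode: res.status,
        responseBody: null,
        latency: Date.now() - started
      });
      await queue.markOutboxStatus(entry.id, 'delivered', { attempts: attempt }); // status name assumed
    } catch (err) {
      const backoff = Math.min(60 * 2 ** attempt, 3600); // capped exponential backoff, seconds
      await queue.markOutboxStatus(entry.id, 'retrying', {
        lastError: err.message,
        attempts: attempt,
        nextAttemptAt: Math.floor(Date.now() / 1000) + backoff
      });
    }
  }
}
```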
35
middleware/src/routes/health.js
Normal file
@ -0,0 +1,35 @@
const express = require('express');
const queue = require('../queue/sqliteQueue');

function createHealthRouter(connectors = []) {
  const router = express.Router();

  router.get('/', async (req, res) => {
    const connectorStatuses = connectors.map((connector) => connector.health());
    const pending = await queue.pendingCount();
    const retrying = await queue.retryingCount();
    const deadLetters = await queue.deadLetterCount();
    res.json({
      status: 'ok',
      connectors: connectorStatuses,
      metrics: {
        pending,
        retrying,
        deadLetters
      }
    });
  });

  router.get('/ready', async (req, res) => {
    try {
      await queue.ping();
      res.json({ status: 'ready' });
    } catch (err) {
      res.status(503).json({ status: 'unready', reason: err.message });
    }
  });

  return router;
}

module.exports = { createHealthRouter };
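A minimal sketch of mounting this router; the real wiring lives in `middleware/src/index.js`, and the port and connector list here are placeholders rather than values from the config.

```js
// Sketch only: serving /health and /health/ready from an Express app.
const express = require('express');
const { createHealthRouter } = require('./routes/health');

const app = express();
app.use('/health', createHealthRouter([])); // pass the live connector instances here
app.listen(8081, () => console.log('health endpoint up')); // port is illustrative
```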
36
middleware/src/routes/instrumentConfig.js
Normal file
@ -0,0 +1,36 @@
const express = require('express');
const { z } = require('zod');
const service = require('../instrumentConfig/service');

const router = express.Router();

const schema = z.object({
  instrument_id: z.string().min(1),
  connector: z.enum(['http-json', 'hl7-tcp', 'astm-serial']),
  enabled: z.boolean().optional().default(true),
  config: z.record(z.any()).optional()
});

router.get('/', async (req, res) => {
  // await is a no-op for sync values and guards against the service going async.
  res.json(await service.list());
});

router.get('/:id', async (req, res) => {
  const entry = await service.get(req.params.id);
  if (!entry) {
    return res.status(404).json({ error: 'not found' });
  }
  res.json(entry);
});

router.post('/', express.json(), async (req, res) => {
  try {
    const payload = schema.parse(req.body);
    const saved = await service.upsert(payload);
    res.status(201).json(saved);
  } catch (err) {
    res.status(400).json({ error: err.message });
  }
});

module.exports = { router };
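A sketch of a request body that passes the schema above. The `/instruments` mount path, the port, and all instrument values are made up for illustration.

```js
// Sketch only: registering an instrument against the route above.
const body = {
  instrument_id: 'HEMA-01',
  connector: 'astm-serial',
  config: { port: '/dev/ttyUSB0', baudRate: 9600 } // illustrative connector settings
};
// `enabled` may be omitted; the schema defaults it to true.

// fetch is built into Node 20+, so this runs from a plain script:
fetch('http://localhost:8080/instruments', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify(body)
}).then((res) => console.log(res.status)); // 201 on success, 400 on schema errors
```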
40
middleware/src/routes/metrics.js
Normal file
@ -0,0 +1,40 @@
const express = require('express');
const queue = require('../queue/sqliteQueue');

const router = express.Router();

function formatMetric(name, value, type = 'gauge', help = '') {
  const lines = [];
  if (help) {
    lines.push(`# HELP ${name} ${help}`);
  }
  lines.push(`# TYPE ${name} ${type}`);
  lines.push(`${name} ${value}`);
  return lines.join('\n');
}

router.get('/', async (req, res) => {
  try {
    const pending = await queue.pendingCount();
    const retrying = await queue.retryingCount();
    const deadLetters = await queue.deadLetterCount();
    const lastSuccess = await queue.getLastSuccessTimestamp();
    const avgLatency = await queue.getAverageLatency();
    const attempts = await queue.getDeliveryAttempts();
    // created_at is assumed to hold SQLite's CURRENT_TIMESTAMP (UTC, 'YYYY-MM-DD HH:MM:SS');
    // normalize it to ISO so the epoch conversion is not skewed by the local timezone.
    const timestamp = lastSuccess
      ? new Date(`${String(lastSuccess).replace(' ', 'T')}Z`).getTime() / 1000
      : 0;
    const metrics = [
      formatMetric('workstation_pending_total', pending, 'gauge', 'Number of pending payloads'),
      formatMetric('workstation_retrying_total', retrying, 'gauge', 'Number of payloads currently retrying'),
      formatMetric('workstation_dead_letters_total', deadLetters, 'gauge', 'Total dead-lettered payloads'),
      formatMetric('workstation_delivery_attempts_total', attempts, 'counter', 'Total delivery attempts logged'),
      formatMetric('workstation_last_success_timestamp', timestamp, 'gauge', 'Epoch seconds of last successful delivery'),
      formatMetric('workstation_avg_latency_ms', Math.round(avgLatency), 'gauge', 'Average delivery latency in milliseconds')
    ];
    res.set('content-type', 'text/plain; version=0.0.4; charset=utf-8');
    res.send(metrics.join('\n'));
  } catch (error) {
    res.status(500).send('metrics unavailable');
  }
});

module.exports = router;
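For reference, `formatMetric` emits the standard Prometheus text exposition, one HELP/TYPE pair per metric; the value below is illustrative.

```js
// Sketch only: what a single formatMetric call renders (value made up).
formatMetric('workstation_pending_total', 3, 'gauge', 'Number of pending payloads');
// # HELP workstation_pending_total Number of pending payloads
// # TYPE workstation_pending_total gauge
// workstation_pending_total 3
```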
75
middleware/src/scripts/maintenance.js
Normal file
@ -0,0 +1,75 @@
const fs = require('fs');
const sqlite3 = require('sqlite3');
const config = require('../../config/default');

const dbPath = config.db.path;

function backup() {
  const target = `${dbPath}.bak-${Date.now()}`;
  fs.copyFileSync(dbPath, target);
  console.log(`backup created ${target}`);
}

function vacuum() {
  const db = new sqlite3.Database(dbPath);
  db.run('VACUUM', (err) => {
    if (err) {
      console.error('vacuum failed', err.message);
      process.exit(1);
    }
    console.log('vacuum completed');
    db.close();
  });
}

function prune(days = 30) {
  const threshold = Date.now() - days * 24 * 60 * 60 * 1000;
  // Match SQLite's CURRENT_TIMESTAMP format ('YYYY-MM-DD HH:MM:SS', UTC),
  // which created_at is assumed to contain; a raw toISOString() would compare
  // lexically against the 'T' separator and skew the cutoff.
  const timestamp = new Date(threshold).toISOString().replace('T', ' ').slice(0, 19);
  const db = new sqlite3.Database(dbPath);
  db.run(
    `DELETE FROM delivery_log WHERE created_at < ?`,
    [timestamp],
    function (err) {
      if (err) {
        console.error('prune failed', err.message);
        process.exit(1);
      }
      console.log(`pruned ${this.changes} delivery_log rows older than ${timestamp}`);
      db.close();
    }
  );
}

function usage() {
  console.log('usage: node maintenance.js <backup|vacuum|prune> [--days=<number>]');
}

const args = process.argv.slice(2);
if (!args.length) {
  usage();
  process.exit(0);
}

const command = args[0];
const opts = {};
for (const arg of args.slice(1)) {
  if (arg.startsWith('--days=')) {
    opts.days = Number(arg.replace('--days=', ''));
  }
}

switch (command) {
  case 'backup':
    backup();
    break;
  case 'vacuum':
    vacuum();
    break;
  case 'prune':
    prune(opts.days || 30);
    break;
  default:
    usage();
    process.exit(1);
}
60
middleware/src/storage/db.js
Normal file
@ -0,0 +1,60 @@
const sqlite3 = require('sqlite3');

class DatabaseClient {
  constructor({ path, busyTimeout = 5000 }) {
    this.db = new sqlite3.Database(path, (err) => {
      if (err) {
        console.error('unable to open sqlite file', err);
      }
    });
    this.db.configure('busyTimeout', busyTimeout);
  }

  run(sql, params = []) {
    return new Promise((resolve, reject) => {
      // Regular function so sqlite3 can bind lastID/changes onto `this`.
      this.db.run(sql, params, function (err) {
        if (err) return reject(err);
        resolve({ lastID: this.lastID, changes: this.changes });
      });
    });
  }

  get(sql, params = []) {
    return new Promise((resolve, reject) => {
      this.db.get(sql, params, (err, row) => {
        if (err) return reject(err);
        resolve(row);
      });
    });
  }

  all(sql, params = []) {
    return new Promise((resolve, reject) => {
      this.db.all(sql, params, (err, rows) => {
        if (err) return reject(err);
        resolve(rows);
      });
    });
  }

  exec(sql) {
    return new Promise((resolve, reject) => {
      this.db.exec(sql, (err) => {
        if (err) return reject(err);
        resolve();
      });
    });
  }

  close() {
    return new Promise((resolve, reject) => {
      this.db.close((err) => {
        if (err) return reject(err);
        resolve();
      });
    });
  }
}

module.exports = DatabaseClient;
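A self-contained sketch of the wrapper in use; the in-memory path and throwaway table are examples only.

```js
// Sketch only: exercising the promise wrapper above.
const DatabaseClient = require('./db');

(async () => {
  const db = new DatabaseClient({ path: ':memory:' });
  await db.exec('CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)');
  const { lastID } = await db.run('INSERT INTO notes (body) VALUES (?)', ['hello']);
  const row = await db.get('SELECT * FROM notes WHERE id = ?', [lastID]);
  console.log(row); // { id: 1, body: 'hello' }
  await db.close();
})();
```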
31
middleware/src/storage/instrumentConfigStore.js
Normal file
@ -0,0 +1,31 @@
const DatabaseClient = require('./db');
const config = require('../../config/default');

class InstrumentConfigStore {
  constructor() {
    this.db = new DatabaseClient(config.db);
  }

  async list() {
    return this.db.all(`SELECT instrument_id, connector, enabled, config FROM instrument_config`);
  }

  async get(instrumentId) {
    return this.db.get(
      `SELECT instrument_id, connector, enabled, config FROM instrument_config WHERE instrument_id = ?`,
      [instrumentId]
    );
  }

  async upsert({ instrument_id, connector, enabled = 1, config: cfg = {} }) {
    const payload = JSON.stringify(cfg);
    await this.db.run(
      `INSERT INTO instrument_config (instrument_id, connector, enabled, config) VALUES (?, ?, ?, ?)
       ON CONFLICT(instrument_id) DO UPDATE SET connector = excluded.connector, enabled = excluded.enabled, config = excluded.config`,
      [instrument_id, connector, Number(enabled), payload]
    );
    return this.get(instrument_id);
  }
}

module.exports = new InstrumentConfigStore();
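The `ON CONFLICT` clause makes `upsert` idempotent per `instrument_id`; a sketch with made-up values:

```js
// Sketch only: repeated upserts update the same row instead of duplicating it.
const store = require('./instrumentConfigStore');

(async () => {
  await store.upsert({ instrument_id: 'CHEM-02', connector: 'http-json' });
  const saved = await store.upsert({
    instrument_id: 'CHEM-02',
    connector: 'http-json',
    enabled: 0,
    config: { endpoint: '/results' } // illustrative connector config
  });
  console.log(saved.enabled); // 0 -- the second call updated the existing row
})();
```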
26
middleware/src/storage/migrate.js
Normal file
@ -0,0 +1,26 @@
const fs = require('fs');
const path = require('path');
const DatabaseClient = require('./db');
const config = require('../../config/default');

async function migrate() {
  const db = new DatabaseClient(config.db);
  const migrationsDir = path.join(__dirname, '..', '..', 'db', 'migrations');
  // Sort so migrations always apply in filename order; readdirSync gives no ordering guarantee.
  const files = fs
    .readdirSync(migrationsDir)
    .filter((name) => name.endsWith('.sql'))
    .sort();
  for (const file of files) {
    const payload = fs.readFileSync(path.join(migrationsDir, file), 'utf8');
    await db.exec(payload);
  }
  await db.close();
}

if (require.main === module) {
  migrate()
    .then(() => console.log('migrations applied'))
    .catch((err) => {
      console.error('migration failed', err);
      process.exit(1);
    });
}

module.exports = migrate;
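Migration files are plain `.sql` scripts applied in filename order. The sketch below reconstructs the shape of the outbox table from the queries in `sqliteQueue.js`; the committed DDL in `middleware/db/migrations` may differ in types, defaults, and indexes.

```js
// Sketch only: the outbox schema implied by the queries in sqliteQueue.js.
const ddl = `
CREATE TABLE IF NOT EXISTS outbox_result (
  id                INTEGER PRIMARY KEY AUTOINCREMENT,
  canonical_payload TEXT NOT NULL,
  status            TEXT NOT NULL DEFAULT 'pending',
  dedupe_key        TEXT UNIQUE,
  attempts          INTEGER NOT NULL DEFAULT 0,
  last_error        TEXT,
  next_attempt_at   INTEGER NOT NULL DEFAULT 0,
  locked_at         INTEGER,
  locked_by         TEXT
)`;
// A migration file would contain just the SQL; migrate() feeds it to db.exec(ddl).
```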
12
middleware/src/utils/hash.js
Normal file
@ -0,0 +1,12 @@
const crypto = require('crypto');

function dedupeKey({ instrument_id, sample_id, result_time, results }) {
  // Sort test codes so the key is independent of result ordering.
  const testCodes = (results || [])
    .map((r) => `${r.test_code}:${r.value}`)
    .sort()
    .join('|');
  const payload = `${instrument_id}|${sample_id}|${result_time}|${testCodes}`;
  return crypto.createHash('sha256').update(payload).digest('hex');
}

module.exports = { dedupeKey };
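Because the test codes are sorted before hashing, two deliveries of the same sample produce the same key even when results arrive in a different order; the UNIQUE `dedupe_key` constraint in the outbox then treats them as one. A sketch with made-up values:

```js
// Sketch only: order-independent dedupe keys (values invented).
const { dedupeKey } = require('./hash');

const a = dedupeKey({
  instrument_id: 'HEMA-01',
  sample_id: 'SMP-001',
  result_time: '202603261000',
  results: [{ test_code: 'WBC', value: '8.2' }, { test_code: 'RBC', value: '4.5' }]
});
const b = dedupeKey({
  instrument_id: 'HEMA-01',
  sample_id: 'SMP-001',
  result_time: '202603261000',
  results: [{ test_code: 'RBC', value: '4.5' }, { test_code: 'WBC', value: '8.2' }]
});
console.log(a === b); // true
```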
9
middleware/src/utils/logger.js
Normal file
@ -0,0 +1,9 @@
const pino = require('pino');
const config = require('../../config/default');

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  base: { service: 'workstation-middleware', env: config.env }
});

module.exports = logger;
33
middleware/test/parsers.test.js
Normal file
@ -0,0 +1,33 @@
const assert = require('node:assert');
const { parse: parseHl7 } = require('../src/parsers/hl7Parser');
const { parse: parseAstm } = require('../src/parsers/astmParser');
const { normalize } = require('../src/normalizers');

async function run() {
  const hl7Sample = String.raw`MSH|^~\&|LIS|LAB|CLQMS|NORTH|202603261000||ORU^R01|msg-123|P|2.5
OBR|1|SOMEID|SMP-001||CBC^^^WBC^H|||202603261000
OBX|1|NM|WBC^White Blood Cell Count||8.2|10^3/uL|N|||F
OBX|2|NM|RBC^Red Blood Cell Count||4.5|10^6/uL|N|||F`;
  const astmSample = String.raw`H|\^&|LAB||ASTM1
P|1
O|1|SMP-100|12345^Instrument|^^^GLU^^^||
R|1|^^^GLU^7.2|7.2|mg/dL|||||N|
R|2|^^^ALT^7.4|50|IU/L|||||N|`;

  const hl7Result = await parseHl7(hl7Sample);
  assert.strictEqual(hl7Result.results.length, 2, 'HL7 should parse two OBX segments');
  const hl7Normalized = normalize(hl7Result);
  assert.strictEqual(hl7Normalized.instrument_id, 'SOMEID');
  assert.strictEqual(hl7Normalized.results[0].test_code, 'WBC');

  const astmResult = parseAstm(astmSample);
  assert.strictEqual(astmResult.results.length, 2, 'ASTM should parse multi R segments');
  const astmNormalized = normalize(astmResult);
  assert.strictEqual(astmNormalized.results[0].test_code, 'GLU');
  console.log('Parser smoke tests passed');
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
16
node_modules/.bin/node-gyp
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
17
node_modules/.bin/node-gyp.cmd
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
28
node_modules/.bin/node-gyp.ps1
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
16
node_modules/.bin/node-which
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
17
node_modules/.bin/node-which.cmd
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
28
node_modules/.bin/node-which.ps1
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
16
node_modules/.bin/nopt
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
17
node_modules/.bin/nopt.cmd
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
28
node_modules/.bin/nopt.ps1
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
16
node_modules/.bin/pino
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
17
node_modules/.bin/pino.cmd
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
28
node_modules/.bin/pino.ps1
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
16
node_modules/.bin/prebuild-install
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
17
node_modules/.bin/prebuild-install.cmd
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
28
node_modules/.bin/prebuild-install.ps1
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
16
node_modules/.bin/rc
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
17
node_modules/.bin/rc.cmd
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
28
node_modules/.bin/rc.ps1
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
16
node_modules/.bin/semver
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
17
node_modules/.bin/semver.cmd
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
28
node_modules/.bin/semver.ps1
generated
vendored
Normal file
File diff suppressed (generated vendored bin shim)
2052
node_modules/.package-lock.json
generated
vendored
Normal file
File diff suppressed because it is too large
20
node_modules/@gar/promise-retry/LICENSE
generated
vendored
Normal file
File diff suppressed (generated vendored file)
86
node_modules/@gar/promise-retry/README.md
generated
vendored
Normal file
File diff suppressed (generated vendored file)
114
node_modules/@gar/promise-retry/lib/index.d.ts
generated
vendored
Normal file
File diff suppressed (generated vendored file)
62
node_modules/@gar/promise-retry/lib/index.js
generated
vendored
Normal file
File diff suppressed (generated vendored file)
109
node_modules/@gar/promise-retry/lib/retry.js
generated
vendored
Normal file
File diff suppressed (generated vendored file)
45
node_modules/@gar/promise-retry/package.json
generated
vendored
Normal file
File diff suppressed (generated vendored file)
15
node_modules/@isaacs/fs-minipass/LICENSE
generated
vendored
Normal file
File diff suppressed (generated vendored file)
71
node_modules/@isaacs/fs-minipass/README.md
generated
vendored
Normal file
File diff suppressed (generated vendored file)
118
node_modules/@isaacs/fs-minipass/dist/commonjs/index.d.ts
generated
vendored
Normal file
File diff suppressed (generated vendored file)
1
node_modules/@isaacs/fs-minipass/dist/commonjs/index.d.ts.map
generated
vendored
Normal file
File diff suppressed (generated vendored file)
430
node_modules/@isaacs/fs-minipass/dist/commonjs/index.js
generated
vendored
Normal file
@ -0,0 +1,430 @@
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
    return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.WriteStreamSync = exports.WriteStream = exports.ReadStreamSync = exports.ReadStream = void 0;
const events_1 = __importDefault(require("events"));
const fs_1 = __importDefault(require("fs"));
const minipass_1 = require("minipass");
const writev = fs_1.default.writev;
const _autoClose = Symbol('_autoClose');
const _close = Symbol('_close');
const _ended = Symbol('_ended');
const _fd = Symbol('_fd');
const _finished = Symbol('_finished');
const _flags = Symbol('_flags');
const _flush = Symbol('_flush');
const _handleChunk = Symbol('_handleChunk');
const _makeBuf = Symbol('_makeBuf');
const _mode = Symbol('_mode');
const _needDrain = Symbol('_needDrain');
const _onerror = Symbol('_onerror');
const _onopen = Symbol('_onopen');
const _onread = Symbol('_onread');
const _onwrite = Symbol('_onwrite');
const _open = Symbol('_open');
const _path = Symbol('_path');
const _pos = Symbol('_pos');
const _queue = Symbol('_queue');
const _read = Symbol('_read');
const _readSize = Symbol('_readSize');
const _reading = Symbol('_reading');
const _remain = Symbol('_remain');
const _size = Symbol('_size');
const _write = Symbol('_write');
const _writing = Symbol('_writing');
const _defaultFlag = Symbol('_defaultFlag');
const _errored = Symbol('_errored');
class ReadStream extends minipass_1.Minipass {
    [_errored] = false;
    [_fd];
    [_path];
    [_readSize];
    [_reading] = false;
    [_size];
    [_remain];
    [_autoClose];
    constructor(path, opt) {
        opt = opt || {};
        super(opt);
        this.readable = true;
        this.writable = false;
        if (typeof path !== 'string') {
            throw new TypeError('path must be a string');
        }
        this[_errored] = false;
        this[_fd] = typeof opt.fd === 'number' ? opt.fd : undefined;
        this[_path] = path;
        this[_readSize] = opt.readSize || 16 * 1024 * 1024;
        this[_reading] = false;
        this[_size] = typeof opt.size === 'number' ? opt.size : Infinity;
        this[_remain] = this[_size];
        this[_autoClose] =
            typeof opt.autoClose === 'boolean' ? opt.autoClose : true;
        if (typeof this[_fd] === 'number') {
            this[_read]();
        }
        else {
            this[_open]();
        }
    }
    get fd() {
        return this[_fd];
    }
    get path() {
        return this[_path];
    }
    //@ts-ignore
    write() {
        throw new TypeError('this is a readable stream');
    }
    //@ts-ignore
    end() {
        throw new TypeError('this is a readable stream');
    }
    [_open]() {
        fs_1.default.open(this[_path], 'r', (er, fd) => this[_onopen](er, fd));
    }
    [_onopen](er, fd) {
        if (er) {
            this[_onerror](er);
        }
        else {
            this[_fd] = fd;
            this.emit('open', fd);
            this[_read]();
        }
    }
    [_makeBuf]() {
        return Buffer.allocUnsafe(Math.min(this[_readSize], this[_remain]));
    }
    [_read]() {
        if (!this[_reading]) {
            this[_reading] = true;
            const buf = this[_makeBuf]();
            /* c8 ignore start */
            if (buf.length === 0) {
                return process.nextTick(() => this[_onread](null, 0, buf));
            }
            /* c8 ignore stop */
            fs_1.default.read(this[_fd], buf, 0, buf.length, null, (er, br, b) => this[_onread](er, br, b));
        }
    }
    [_onread](er, br, buf) {
        this[_reading] = false;
        if (er) {
            this[_onerror](er);
        }
        else if (this[_handleChunk](br, buf)) {
            this[_read]();
        }
    }
    [_close]() {
        if (this[_autoClose] && typeof this[_fd] === 'number') {
            const fd = this[_fd];
            this[_fd] = undefined;
            fs_1.default.close(fd, er => er ? this.emit('error', er) : this.emit('close'));
        }
    }
    [_onerror](er) {
        this[_reading] = true;
        this[_close]();
        this.emit('error', er);
    }
    [_handleChunk](br, buf) {
        let ret = false;
        // no effect if infinite
        this[_remain] -= br;
        if (br > 0) {
            ret = super.write(br < buf.length ? buf.subarray(0, br) : buf);
        }
        if (br === 0 || this[_remain] <= 0) {
            ret = false;
            this[_close]();
            super.end();
        }
        return ret;
    }
    emit(ev, ...args) {
        switch (ev) {
            case 'prefinish':
            case 'finish':
                return false;
            case 'drain':
                if (typeof this[_fd] === 'number') {
                    this[_read]();
                }
                return false;
            case 'error':
                if (this[_errored]) {
                    return false;
                }
                this[_errored] = true;
                return super.emit(ev, ...args);
            default:
                return super.emit(ev, ...args);
        }
    }
}
exports.ReadStream = ReadStream;
class ReadStreamSync extends ReadStream {
    [_open]() {
        let threw = true;
        try {
            this[_onopen](null, fs_1.default.openSync(this[_path], 'r'));
            threw = false;
        }
        finally {
            if (threw) {
                this[_close]();
            }
        }
    }
    [_read]() {
        let threw = true;
        try {
            if (!this[_reading]) {
                this[_reading] = true;
                do {
                    const buf = this[_makeBuf]();
                    /* c8 ignore start */
                    const br = buf.length === 0
                        ? 0
                        : fs_1.default.readSync(this[_fd], buf, 0, buf.length, null);
                    /* c8 ignore stop */
                    if (!this[_handleChunk](br, buf)) {
                        break;
                    }
                } while (true);
                this[_reading] = false;
            }
            threw = false;
        }
        finally {
            if (threw) {
                this[_close]();
            }
        }
    }
    [_close]() {
        if (this[_autoClose] && typeof this[_fd] === 'number') {
            const fd = this[_fd];
            this[_fd] = undefined;
            fs_1.default.closeSync(fd);
            this.emit('close');
        }
    }
}
exports.ReadStreamSync = ReadStreamSync;
class WriteStream extends events_1.default {
    readable = false;
    writable = true;
    [_errored] = false;
    [_writing] = false;
    [_ended] = false;
    [_queue] = [];
    [_needDrain] = false;
    [_path];
    [_mode];
    [_autoClose];
    [_fd];
    [_defaultFlag];
    [_flags];
    [_finished] = false;
    [_pos];
    constructor(path, opt) {
        opt = opt || {};
        super(opt);
        this[_path] = path;
        this[_fd] = typeof opt.fd === 'number' ? opt.fd : undefined;
        this[_mode] = opt.mode === undefined ? 0o666 : opt.mode;
        this[_pos] = typeof opt.start === 'number' ? opt.start : undefined;
        this[_autoClose] =
            typeof opt.autoClose === 'boolean' ? opt.autoClose : true;
        // truncating makes no sense when writing into the middle
        const defaultFlag = this[_pos] !== undefined ? 'r+' : 'w';
        this[_defaultFlag] = opt.flags === undefined;
        this[_flags] = opt.flags === undefined ? defaultFlag : opt.flags;
        if (this[_fd] === undefined) {
            this[_open]();
        }
    }
    emit(ev, ...args) {
        if (ev === 'error') {
            if (this[_errored]) {
                return false;
            }
            this[_errored] = true;
        }
        return super.emit(ev, ...args);
    }
    get fd() {
        return this[_fd];
    }
    get path() {
        return this[_path];
    }
    [_onerror](er) {
        this[_close]();
        this[_writing] = true;
        this.emit('error', er);
    }
    [_open]() {
        fs_1.default.open(this[_path], this[_flags], this[_mode], (er, fd) => this[_onopen](er, fd));
    }
    [_onopen](er, fd) {
        if (this[_defaultFlag] &&
            this[_flags] === 'r+' &&
            er &&
            er.code === 'ENOENT') {
            this[_flags] = 'w';
            this[_open]();
        }
        else if (er) {
            this[_onerror](er);
        }
        else {
            this[_fd] = fd;
            this.emit('open', fd);
            if (!this[_writing]) {
                this[_flush]();
            }
        }
    }
    end(buf, enc) {
        if (buf) {
            //@ts-ignore
            this.write(buf, enc);
        }
        this[_ended] = true;
        // synthetic after-write logic, where drain/finish live
        if (!this[_writing] &&
            !this[_queue].length &&
            typeof this[_fd] === 'number') {
            this[_onwrite](null, 0);
        }
        return this;
    }
    write(buf, enc) {
        if (typeof buf === 'string') {
            buf = Buffer.from(buf, enc);
        }
        if (this[_ended]) {
            this.emit('error', new Error('write() after end()'));
            return false;
        }
        if (this[_fd] === undefined || this[_writing] || this[_queue].length) {
            this[_queue].push(buf);
            this[_needDrain] = true;
            return false;
        }
        this[_writing] = true;
        this[_write](buf);
        return true;
    }
    [_write](buf) {
        fs_1.default.write(this[_fd], buf, 0, buf.length, this[_pos], (er, bw) => this[_onwrite](er, bw));
    }
    [_onwrite](er, bw) {
        if (er) {
            this[_onerror](er);
        }
        else {
            if (this[_pos] !== undefined && typeof bw === 'number') {
                this[_pos] += bw;
            }
            if (this[_queue].length) {
                this[_flush]();
            }
            else {
                this[_writing] = false;
                if (this[_ended] && !this[_finished]) {
                    this[_finished] = true;
                    this[_close]();
                    this.emit('finish');
                }
                else if (this[_needDrain]) {
                    this[_needDrain] = false;
                    this.emit('drain');
                }
            }
        }
    }
    [_flush]() {
        if (this[_queue].length === 0) {
            if (this[_ended]) {
                this[_onwrite](null, 0);
            }
        }
        else if (this[_queue].length === 1) {
            this[_write](this[_queue].pop());
        }
        else {
            const iovec = this[_queue];
            this[_queue] = [];
            writev(this[_fd], iovec, this[_pos], (er, bw) => this[_onwrite](er, bw));
        }
    }
    [_close]() {
        if (this[_autoClose] && typeof this[_fd] === 'number') {
            const fd = this[_fd];
            this[_fd] = undefined;
            fs_1.default.close(fd, er => er ? this.emit('error', er) : this.emit('close'));
        }
    }
}
exports.WriteStream = WriteStream;
class WriteStreamSync extends WriteStream {
    [_open]() {
        let fd;
        // only wrap in a try{} block if we know we'll retry, to avoid
        // the rethrow obscuring the error's source frame in most cases.
        if (this[_defaultFlag] && this[_flags] === 'r+') {
            try {
                fd = fs_1.default.openSync(this[_path], this[_flags], this[_mode]);
            }
            catch (er) {
                if (er?.code === 'ENOENT') {
                    this[_flags] = 'w';
                    return this[_open]();
                }
                else {
                    throw er;
                }
            }
        }
        else {
            fd = fs_1.default.openSync(this[_path], this[_flags], this[_mode]);
        }
        this[_onopen](null, fd);
    }
    [_close]() {
        if (this[_autoClose] && typeof this[_fd] === 'number') {
            const fd = this[_fd];
            this[_fd] = undefined;
            fs_1.default.closeSync(fd);
            this.emit('close');
        }
    }
    [_write](buf) {
        // throw the original, but try to close if it fails
        let threw = true;
        try {
            this[_onwrite](null, fs_1.default.writeSync(this[_fd], buf, 0, buf.length, this[_pos]));
            threw = false;
        }
        finally {
            if (threw) {
                try {
                    this[_close]();
                }
                catch {
                    // ok error
                }
            }
        }
    }
}
exports.WriteStreamSync = WriteStreamSync;
//# sourceMappingURL=index.js.map
1
node_modules/@isaacs/fs-minipass/dist/commonjs/index.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
3
node_modules/@isaacs/fs-minipass/dist/commonjs/package.json
generated
vendored
Normal file
@ -0,0 +1,3 @@
{
  "type": "commonjs"
}
118
node_modules/@isaacs/fs-minipass/dist/esm/index.d.ts
generated
vendored
Normal file
@ -0,0 +1,118 @@
/// <reference types="node" resolution-mode="require"/>
/// <reference types="node" resolution-mode="require"/>
/// <reference types="node" resolution-mode="require"/>
import EE from 'events';
import { Minipass } from 'minipass';
declare const _autoClose: unique symbol;
declare const _close: unique symbol;
declare const _ended: unique symbol;
declare const _fd: unique symbol;
declare const _finished: unique symbol;
declare const _flags: unique symbol;
declare const _flush: unique symbol;
declare const _handleChunk: unique symbol;
declare const _makeBuf: unique symbol;
declare const _mode: unique symbol;
declare const _needDrain: unique symbol;
declare const _onerror: unique symbol;
declare const _onopen: unique symbol;
declare const _onread: unique symbol;
declare const _onwrite: unique symbol;
declare const _open: unique symbol;
declare const _path: unique symbol;
declare const _pos: unique symbol;
declare const _queue: unique symbol;
declare const _read: unique symbol;
declare const _readSize: unique symbol;
declare const _reading: unique symbol;
declare const _remain: unique symbol;
declare const _size: unique symbol;
declare const _write: unique symbol;
declare const _writing: unique symbol;
declare const _defaultFlag: unique symbol;
declare const _errored: unique symbol;
export type ReadStreamOptions = Minipass.Options<Minipass.ContiguousData> & {
    fd?: number;
    readSize?: number;
    size?: number;
    autoClose?: boolean;
};
export type ReadStreamEvents = Minipass.Events<Minipass.ContiguousData> & {
    open: [fd: number];
};
export declare class ReadStream extends Minipass<Minipass.ContiguousData, Buffer, ReadStreamEvents> {
    [_errored]: boolean;
    [_fd]?: number;
    [_path]: string;
    [_readSize]: number;
    [_reading]: boolean;
    [_size]: number;
    [_remain]: number;
    [_autoClose]: boolean;
    constructor(path: string, opt: ReadStreamOptions);
    get fd(): number | undefined;
    get path(): string;
    write(): void;
    end(): void;
    [_open](): void;
    [_onopen](er?: NodeJS.ErrnoException | null, fd?: number): void;
    [_makeBuf](): Buffer;
    [_read](): void;
    [_onread](er?: NodeJS.ErrnoException | null, br?: number, buf?: Buffer): void;
    [_close](): void;
    [_onerror](er: NodeJS.ErrnoException): void;
    [_handleChunk](br: number, buf: Buffer): boolean;
    emit<Event extends keyof ReadStreamEvents>(ev: Event, ...args: ReadStreamEvents[Event]): boolean;
}
export declare class ReadStreamSync extends ReadStream {
    [_open](): void;
    [_read](): void;
    [_close](): void;
}
export type WriteStreamOptions = {
    fd?: number;
    autoClose?: boolean;
    mode?: number;
    captureRejections?: boolean;
    start?: number;
    flags?: string;
};
export declare class WriteStream extends EE {
    readable: false;
    writable: boolean;
    [_errored]: boolean;
    [_writing]: boolean;
    [_ended]: boolean;
    [_queue]: Buffer[];
    [_needDrain]: boolean;
    [_path]: string;
    [_mode]: number;
    [_autoClose]: boolean;
    [_fd]?: number;
    [_defaultFlag]: boolean;
    [_flags]: string;
    [_finished]: boolean;
    [_pos]?: number;
    constructor(path: string, opt: WriteStreamOptions);
    emit(ev: string, ...args: any[]): boolean;
    get fd(): number | undefined;
    get path(): string;
    [_onerror](er: NodeJS.ErrnoException): void;
    [_open](): void;
    [_onopen](er?: null | NodeJS.ErrnoException, fd?: number): void;
    end(buf: string, enc?: BufferEncoding): this;
    end(buf?: Buffer, enc?: undefined): this;
    write(buf: string, enc?: BufferEncoding): boolean;
    write(buf: Buffer, enc?: undefined): boolean;
    [_write](buf: Buffer): void;
    [_onwrite](er?: null | NodeJS.ErrnoException, bw?: number): void;
    [_flush](): void;
    [_close](): void;
}
export declare class WriteStreamSync extends WriteStream {
    [_open](): void;
    [_close](): void;
    [_write](buf: Buffer): void;
}
export {};
//# sourceMappingURL=index.d.ts.map
1
node_modules/@isaacs/fs-minipass/dist/esm/index.d.ts.map
generated
vendored
Normal file
@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../src/index.ts"],"names":[],"mappings":";;;AAAA,OAAO,EAAE,MAAM,QAAQ,CAAA;AAEvB,OAAO,EAAE,QAAQ,EAAE,MAAM,UAAU,CAAA;AAInC,QAAA,MAAM,UAAU,eAAuB,CAAA;AACvC,QAAA,MAAM,MAAM,eAAmB,CAAA;AAC/B,QAAA,MAAM,MAAM,eAAmB,CAAA;AAC/B,QAAA,MAAM,GAAG,eAAgB,CAAA;AACzB,QAAA,MAAM,SAAS,eAAsB,CAAA;AACrC,QAAA,MAAM,MAAM,eAAmB,CAAA;AAC/B,QAAA,MAAM,MAAM,eAAmB,CAAA;AAC/B,QAAA,MAAM,YAAY,eAAyB,CAAA;AAC3C,QAAA,MAAM,QAAQ,eAAqB,CAAA;AACnC,QAAA,MAAM,KAAK,eAAkB,CAAA;AAC7B,QAAA,MAAM,UAAU,eAAuB,CAAA;AACvC,QAAA,MAAM,QAAQ,eAAqB,CAAA;AACnC,QAAA,MAAM,OAAO,eAAoB,CAAA;AACjC,QAAA,MAAM,OAAO,eAAoB,CAAA;AACjC,QAAA,MAAM,QAAQ,eAAqB,CAAA;AACnC,QAAA,MAAM,KAAK,eAAkB,CAAA;AAC7B,QAAA,MAAM,KAAK,eAAkB,CAAA;AAC7B,QAAA,MAAM,IAAI,eAAiB,CAAA;AAC3B,QAAA,MAAM,MAAM,eAAmB,CAAA;AAC/B,QAAA,MAAM,KAAK,eAAkB,CAAA;AAC7B,QAAA,MAAM,SAAS,eAAsB,CAAA;AACrC,QAAA,MAAM,QAAQ,eAAqB,CAAA;AACnC,QAAA,MAAM,OAAO,eAAoB,CAAA;AACjC,QAAA,MAAM,KAAK,eAAkB,CAAA;AAC7B,QAAA,MAAM,MAAM,eAAmB,CAAA;AAC/B,QAAA,MAAM,QAAQ,eAAqB,CAAA;AACnC,QAAA,MAAM,YAAY,eAAyB,CAAA;AAC3C,QAAA,MAAM,QAAQ,eAAqB,CAAA;AAEnC,MAAM,MAAM,iBAAiB,GAC3B,QAAQ,CAAC,OAAO,CAAC,QAAQ,CAAC,cAAc,CAAC,GAAG;IAC1C,EAAE,CAAC,EAAE,MAAM,CAAA;IACX,QAAQ,CAAC,EAAE,MAAM,CAAA;IACjB,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,SAAS,CAAC,EAAE,OAAO,CAAA;CACpB,CAAA;AAEH,MAAM,MAAM,gBAAgB,GAAG,QAAQ,CAAC,MAAM,CAAC,QAAQ,CAAC,cAAc,CAAC,GAAG;IACxE,IAAI,EAAE,CAAC,EAAE,EAAE,MAAM,CAAC,CAAA;CACnB,CAAA;AAED,qBAAa,UAAW,SAAQ,QAAQ,CACtC,QAAQ,CAAC,cAAc,EACvB,MAAM,EACN,gBAAgB,CACjB;IACC,CAAC,QAAQ,CAAC,EAAE,OAAO,CAAS;IAC5B,CAAC,GAAG,CAAC,CAAC,EAAE,MAAM,CAAC;IACf,CAAC,KAAK,CAAC,EAAE,MAAM,CAAC;IAChB,CAAC,SAAS,CAAC,EAAE,MAAM,CAAC;IACpB,CAAC,QAAQ,CAAC,EAAE,OAAO,CAAS;IAC5B,CAAC,KAAK,CAAC,EAAE,MAAM,CAAC;IAChB,CAAC,OAAO,CAAC,EAAE,MAAM,CAAC;IAClB,CAAC,UAAU,CAAC,EAAE,OAAO,CAAA;gBAET,IAAI,EAAE,MAAM,EAAE,GAAG,EAAE,iBAAiB;IA4BhD,IAAI,EAAE,uBAEL;IAED,IAAI,IAAI,WAEP;IAGD,KAAK;IAKL,GAAG;IAIH,CAAC,KAAK,CAAC;IAIP,CAAC,OAAO,CAAC,CAAC,EAAE,CAAC,EAAE,MAAM,CAAC,cAAc,GAAG,IAAI,EAAE,EAAE,CAAC,EAAE,MAAM;IAUxD,CAAC,QAAQ,CAAC;IAIV,CAAC,KAAK,CAAC;IAeP,CAAC,OAAO,CAAC,CAAC,EAAE,CAAC,EAAE,MAAM,CAAC,cAAc,GAAG,IAAI,EAAE,EAAE,CAAC,EAAE,MAAM,EAAE,GAAG,CAAC,EAAE,MAAM;IAStE,CAAC,MAAM,CAAC;IAUR,CAAC,QAAQ,CAAC,CAAC,EAAE,EAAE,MAAM,CAAC,cAAc;IAMpC,CAAC,YAAY,CAAC,CAAC,EAAE,EAAE,MAAM,EAAE,GAAG,EAAE,MAAM;IAiBtC,IAAI,CAAC,KAAK,SAAS,MAAM,gBAAgB,EACvC,EAAE,EAAE,KAAK,EACT,GAAG,IAAI,EAAE,gBAAgB,CAAC,KAAK,CAAC,GAC/B,OAAO;CAuBX;AAED,qBAAa,cAAe,SAAQ,UAAU;IAC5C,CAAC,KAAK,CAAC;IAYP,CAAC,KAAK,CAAC;IA2BP,CAAC,MAAM,CAAC;CAQT;AAED,MAAM,MAAM,kBAAkB,GAAG;IAC/B,EAAE,CAAC,EAAE,MAAM,CAAA;IACX,SAAS,CAAC,EAAE,OAAO,CAAA;IACnB,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,iBAAiB,CAAC,EAAE,OAAO,CAAA;IAC3B,KAAK,CAAC,EAAE,MAAM,CAAA;IACd,KAAK,CAAC,EAAE,MAAM,CAAA;CACf,CAAA;AAED,qBAAa,WAAY,SAAQ,EAAE;IACjC,QAAQ,EAAE,KAAK,CAAQ;IACvB,QAAQ,EAAE,OAAO,CAAQ;IACzB,CAAC,QAAQ,CAAC,EAAE,OAAO,CAAS;IAC5B,CAAC,QAAQ,CAAC,EAAE,OAAO,CAAS;IAC5B,CAAC,MAAM,CAAC,EAAE,OAAO,CAAS;IAC1B,CAAC,MAAM,CAAC,EAAE,MAAM,EAAE,CAAM;IACxB,CAAC,UAAU,CAAC,EAAE,OAAO,CAAS;IAC9B,CAAC,KAAK,CAAC,EAAE,MAAM,CAAC;IAChB,CAAC,KAAK,CAAC,EAAE,MAAM,CAAC;IAChB,CAAC,UAAU,CAAC,EAAE,OAAO,CAAC;IACtB,CAAC,GAAG,CAAC,CAAC,EAAE,MAAM,CAAC;IACf,CAAC,YAAY,CAAC,EAAE,OAAO,CAAC;IACxB,CAAC,MAAM,CAAC,EAAE,MAAM,CAAC;IACjB,CAAC,SAAS,CAAC,EAAE,OAAO,CAAS;IAC7B,CAAC,IAAI,CAAC,CAAC,EAAE,MAAM,CAAA;gBAEH,IAAI,EAAE,MAAM,EAAE,GAAG,EAAE,kBAAkB;IAoBjD,IAAI,CAAC,EAAE,EAAE,MAAM,EAAE,GAAG,IAAI,EAAE,GAAG,EAAE;IAU/B,IAAI,EAAE,uBAEL;IAED,IAAI,IAAI,WAEP;IAED,CAAC,QAAQ,CAAC,CAAC,EAAE,EAAE,MAAM,CAAC,cAAc;IAMpC,CAAC,KAAK,CAAC;IAMP,CAAC,OAAO,CAAC,CAAC,EAAE,CAAC,EAAE,IAAI,GAAG,MAAM,CAAC,cAAc,EAAE,EAAE,CAAC,EAAE,MAAM;IAoBxD,GAAG,CAAC,GAAG,EAAE,MAAM,EAAE,GAAG,CAAC,EAAE,cAAc,GAAG,IAAI;IAC5C,GAAG,CAAC,GAAG,CAAC,EAAE,MAAM,EAAE,GAAG,CAAC,EAAE,SAAS,GAAG,IAAI;IAoBxC,KAAK,CAAC,GAAG,EAAE,MAAM,EAAE,GAAG,CAAC,EAAE,cAAc,GAAG,OAAO;IACjD,KAAK,CAAC,GAAG,EAAE,MAAM,EAAE,GAAG,CAAC,EAAE,SAAS,GAAG,OAAO;IAsB5C,CAAC,MAAM,CAAC,CAAC,GAAG,EAAE,MAAM;IAWpB,CAAC,QAAQ,CAAC,CAAC,EAAE,CAAC,EAAE,IAAI,GAAG,MAAM,CAAC,cAAc,EAAE,EAAE,CAAC,EAAE,MAAM;IAwBzD,CAAC,MAAM,CAAC;IAgBR,CAAC,MAAM,CAAC;CAST;AAED,qBAAa,eAAgB,SAAQ,WAAW;IAC9C,CAAC,KAAK,CAAC,IAAI,IAAI;IAsBf,CAAC,MAAM,CAAC;IASR,CAAC,MAAM,CAAC,CAAC,GAAG,EAAE,MAAM;CAmBrB"}
420
node_modules/@isaacs/fs-minipass/dist/esm/index.js
generated
vendored
Normal file
@ -0,0 +1,420 @@
import EE from 'events';
import fs from 'fs';
import { Minipass } from 'minipass';
const writev = fs.writev;
const _autoClose = Symbol('_autoClose');
const _close = Symbol('_close');
const _ended = Symbol('_ended');
const _fd = Symbol('_fd');
const _finished = Symbol('_finished');
const _flags = Symbol('_flags');
const _flush = Symbol('_flush');
const _handleChunk = Symbol('_handleChunk');
const _makeBuf = Symbol('_makeBuf');
const _mode = Symbol('_mode');
const _needDrain = Symbol('_needDrain');
const _onerror = Symbol('_onerror');
const _onopen = Symbol('_onopen');
const _onread = Symbol('_onread');
const _onwrite = Symbol('_onwrite');
const _open = Symbol('_open');
const _path = Symbol('_path');
const _pos = Symbol('_pos');
const _queue = Symbol('_queue');
const _read = Symbol('_read');
const _readSize = Symbol('_readSize');
const _reading = Symbol('_reading');
const _remain = Symbol('_remain');
const _size = Symbol('_size');
const _write = Symbol('_write');
const _writing = Symbol('_writing');
const _defaultFlag = Symbol('_defaultFlag');
const _errored = Symbol('_errored');
export class ReadStream extends Minipass {
    [_errored] = false;
    [_fd];
    [_path];
    [_readSize];
    [_reading] = false;
    [_size];
    [_remain];
    [_autoClose];
    constructor(path, opt) {
        opt = opt || {};
        super(opt);
        this.readable = true;
        this.writable = false;
        if (typeof path !== 'string') {
            throw new TypeError('path must be a string');
        }
        this[_errored] = false;
        this[_fd] = typeof opt.fd === 'number' ? opt.fd : undefined;
        this[_path] = path;
        this[_readSize] = opt.readSize || 16 * 1024 * 1024;
        this[_reading] = false;
        this[_size] = typeof opt.size === 'number' ? opt.size : Infinity;
        this[_remain] = this[_size];
        this[_autoClose] =
            typeof opt.autoClose === 'boolean' ? opt.autoClose : true;
        if (typeof this[_fd] === 'number') {
            this[_read]();
        }
        else {
            this[_open]();
        }
    }
    get fd() {
        return this[_fd];
    }
    get path() {
        return this[_path];
    }
    //@ts-ignore
    write() {
        throw new TypeError('this is a readable stream');
    }
    //@ts-ignore
    end() {
        throw new TypeError('this is a readable stream');
    }
    [_open]() {
        fs.open(this[_path], 'r', (er, fd) => this[_onopen](er, fd));
    }
    [_onopen](er, fd) {
        if (er) {
            this[_onerror](er);
        }
        else {
            this[_fd] = fd;
            this.emit('open', fd);
            this[_read]();
        }
    }
    [_makeBuf]() {
        return Buffer.allocUnsafe(Math.min(this[_readSize], this[_remain]));
    }
    [_read]() {
        if (!this[_reading]) {
            this[_reading] = true;
            const buf = this[_makeBuf]();
            /* c8 ignore start */
            if (buf.length === 0) {
                return process.nextTick(() => this[_onread](null, 0, buf));
            }
            /* c8 ignore stop */
            fs.read(this[_fd], buf, 0, buf.length, null, (er, br, b) => this[_onread](er, br, b));
        }
    }
    [_onread](er, br, buf) {
        this[_reading] = false;
        if (er) {
            this[_onerror](er);
        }
        else if (this[_handleChunk](br, buf)) {
            this[_read]();
        }
    }
    [_close]() {
        if (this[_autoClose] && typeof this[_fd] === 'number') {
            const fd = this[_fd];
            this[_fd] = undefined;
            fs.close(fd, er => er ? this.emit('error', er) : this.emit('close'));
        }
    }
    [_onerror](er) {
        this[_reading] = true;
        this[_close]();
        this.emit('error', er);
    }
    [_handleChunk](br, buf) {
        let ret = false;
        // no effect if infinite
        this[_remain] -= br;
        if (br > 0) {
            ret = super.write(br < buf.length ? buf.subarray(0, br) : buf);
        }
        if (br === 0 || this[_remain] <= 0) {
            ret = false;
            this[_close]();
            super.end();
        }
        return ret;
    }
    emit(ev, ...args) {
        switch (ev) {
            case 'prefinish':
            case 'finish':
                return false;
            case 'drain':
                if (typeof this[_fd] === 'number') {
                    this[_read]();
                }
                return false;
            case 'error':
                if (this[_errored]) {
                    return false;
                }
                this[_errored] = true;
                return super.emit(ev, ...args);
            default:
                return super.emit(ev, ...args);
        }
    }
}
export class ReadStreamSync extends ReadStream {
    [_open]() {
        let threw = true;
        try {
            this[_onopen](null, fs.openSync(this[_path], 'r'));
            threw = false;
        }
        finally {
            if (threw) {
                this[_close]();
            }
        }
    }
    [_read]() {
        let threw = true;
        try {
            if (!this[_reading]) {
                this[_reading] = true;
                do {
                    const buf = this[_makeBuf]();
                    /* c8 ignore start */
                    const br = buf.length === 0
                        ? 0
                        : fs.readSync(this[_fd], buf, 0, buf.length, null);
                    /* c8 ignore stop */
                    if (!this[_handleChunk](br, buf)) {
                        break;
                    }
                } while (true);
                this[_reading] = false;
            }
            threw = false;
        }
        finally {
            if (threw) {
                this[_close]();
            }
        }
    }
    [_close]() {
        if (this[_autoClose] && typeof this[_fd] === 'number') {
            const fd = this[_fd];
            this[_fd] = undefined;
            fs.closeSync(fd);
            this.emit('close');
        }
    }
}
export class WriteStream extends EE {
    readable = false;
    writable = true;
    [_errored] = false;
    [_writing] = false;
    [_ended] = false;
    [_queue] = [];
    [_needDrain] = false;
    [_path];
    [_mode];
    [_autoClose];
    [_fd];
    [_defaultFlag];
    [_flags];
    [_finished] = false;
    [_pos];
    constructor(path, opt) {
        opt = opt || {};
        super(opt);
        this[_path] = path;
        this[_fd] = typeof opt.fd === 'number' ? opt.fd : undefined;
        this[_mode] = opt.mode === undefined ? 0o666 : opt.mode;
        this[_pos] = typeof opt.start === 'number' ? opt.start : undefined;
        this[_autoClose] =
            typeof opt.autoClose === 'boolean' ? opt.autoClose : true;
        // truncating makes no sense when writing into the middle
        const defaultFlag = this[_pos] !== undefined ? 'r+' : 'w';
        this[_defaultFlag] = opt.flags === undefined;
        this[_flags] = opt.flags === undefined ? defaultFlag : opt.flags;
        if (this[_fd] === undefined) {
            this[_open]();
        }
    }
    emit(ev, ...args) {
        if (ev === 'error') {
            if (this[_errored]) {
                return false;
            }
            this[_errored] = true;
        }
        return super.emit(ev, ...args);
    }
    get fd() {
        return this[_fd];
    }
    get path() {
        return this[_path];
    }
    [_onerror](er) {
        this[_close]();
        this[_writing] = true;
        this.emit('error', er);
    }
    [_open]() {
        fs.open(this[_path], this[_flags], this[_mode], (er, fd) => this[_onopen](er, fd));
    }
    [_onopen](er, fd) {
        if (this[_defaultFlag] &&
            this[_flags] === 'r+' &&
            er &&
            er.code === 'ENOENT') {
            this[_flags] = 'w';
            this[_open]();
        }
        else if (er) {
            this[_onerror](er);
        }
        else {
            this[_fd] = fd;
            this.emit('open', fd);
            if (!this[_writing]) {
                this[_flush]();
            }
        }
    }
    end(buf, enc) {
        if (buf) {
            //@ts-ignore
            this.write(buf, enc);
        }
        this[_ended] = true;
        // synthetic after-write logic, where drain/finish live
        if (!this[_writing] &&
            !this[_queue].length &&
            typeof this[_fd] === 'number') {
            this[_onwrite](null, 0);
        }
        return this;
    }
    write(buf, enc) {
        if (typeof buf === 'string') {
            buf = Buffer.from(buf, enc);
        }
        if (this[_ended]) {
            this.emit('error', new Error('write() after end()'));
            return false;
        }
        if (this[_fd] === undefined || this[_writing] || this[_queue].length) {
            this[_queue].push(buf);
            this[_needDrain] = true;
            return false;
        }
        this[_writing] = true;
        this[_write](buf);
        return true;
    }
    [_write](buf) {
        fs.write(this[_fd], buf, 0, buf.length, this[_pos], (er, bw) => this[_onwrite](er, bw));
    }
    [_onwrite](er, bw) {
        if (er) {
            this[_onerror](er);
        }
        else {
            if (this[_pos] !== undefined && typeof bw === 'number') {
                this[_pos] += bw;
            }
            if (this[_queue].length) {
                this[_flush]();
            }
            else {
                this[_writing] = false;
                if (this[_ended] && !this[_finished]) {
                    this[_finished] = true;
                    this[_close]();
                    this.emit('finish');
                }
                else if (this[_needDrain]) {
                    this[_needDrain] = false;
                    this.emit('drain');
                }
            }
        }
    }
    [_flush]() {
        if (this[_queue].length === 0) {
            if (this[_ended]) {
                this[_onwrite](null, 0);
            }
        }
        else if (this[_queue].length === 1) {
            this[_write](this[_queue].pop());
        }
        else {
            const iovec = this[_queue];
            this[_queue] = [];
            writev(this[_fd], iovec, this[_pos], (er, bw) => this[_onwrite](er, bw));
        }
    }
    [_close]() {
        if (this[_autoClose] && typeof this[_fd] === 'number') {
            const fd = this[_fd];
            this[_fd] = undefined;
            fs.close(fd, er => er ? this.emit('error', er) : this.emit('close'));
        }
    }
}
export class WriteStreamSync extends WriteStream {
    [_open]() {
        let fd;
        // only wrap in a try{} block if we know we'll retry, to avoid
        // the rethrow obscuring the error's source frame in most cases.
        if (this[_defaultFlag] && this[_flags] === 'r+') {
            try {
                fd = fs.openSync(this[_path], this[_flags], this[_mode]);
            }
            catch (er) {
                if (er?.code === 'ENOENT') {
                    this[_flags] = 'w';
                    return this[_open]();
                }
                else {
                    throw er;
                }
            }
        }
        else {
            fd = fs.openSync(this[_path], this[_flags], this[_mode]);
        }
        this[_onopen](null, fd);
    }
    [_close]() {
        if (this[_autoClose] && typeof this[_fd] === 'number') {
            const fd = this[_fd];
            this[_fd] = undefined;
            fs.closeSync(fd);
            this.emit('close');
        }
    }
    [_write](buf) {
        // throw the original, but try to close if it fails
        let threw = true;
        try {
            this[_onwrite](null, fs.writeSync(this[_fd], buf, 0, buf.length, this[_pos]));
            threw = false;
        }
        finally {
            if (threw) {
                try {
                    this[_close]();
                }
                catch {
                    // ok error
                }
            }
        }
    }
}
//# sourceMappingURL=index.js.map
1
node_modules/@isaacs/fs-minipass/dist/esm/index.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
3
node_modules/@isaacs/fs-minipass/dist/esm/package.json
generated
vendored
Normal file
@ -0,0 +1,3 @@
{
  "type": "module"
}
72
node_modules/@isaacs/fs-minipass/package.json
generated
vendored
Normal file
@ -0,0 +1,72 @@
{
  "name": "@isaacs/fs-minipass",
  "version": "4.0.1",
  "main": "./dist/commonjs/index.js",
  "scripts": {
    "prepare": "tshy",
    "pretest": "npm run prepare",
    "test": "tap",
    "preversion": "npm test",
    "postversion": "npm publish",
    "prepublishOnly": "git push origin --follow-tags",
    "format": "prettier --write . --loglevel warn",
    "typedoc": "typedoc --tsconfig .tshy/esm.json ./src/*.ts"
  },
  "keywords": [],
  "author": "Isaac Z. Schlueter",
  "license": "ISC",
  "repository": {
    "type": "git",
    "url": "https://github.com/npm/fs-minipass.git"
  },
  "description": "fs read and write streams based on minipass",
  "dependencies": {
    "minipass": "^7.0.4"
  },
  "devDependencies": {
    "@types/node": "^20.11.30",
    "mutate-fs": "^2.1.1",
    "prettier": "^3.2.5",
    "tap": "^18.7.1",
    "tshy": "^1.12.0",
    "typedoc": "^0.25.12"
  },
  "files": [
    "dist"
  ],
  "engines": {
    "node": ">=18.0.0"
  },
  "tshy": {
    "exports": {
      "./package.json": "./package.json",
      ".": "./src/index.ts"
    }
  },
  "exports": {
    "./package.json": "./package.json",
    ".": {
      "import": {
        "types": "./dist/esm/index.d.ts",
        "default": "./dist/esm/index.js"
      },
      "require": {
        "types": "./dist/commonjs/index.d.ts",
        "default": "./dist/commonjs/index.js"
      }
    }
  },
  "types": "./dist/commonjs/index.d.ts",
  "type": "module",
  "prettier": {
    "semi": false,
    "printWidth": 75,
    "tabWidth": 2,
    "useTabs": false,
    "singleQuote": true,
    "jsxSingleQuote": false,
    "bracketSameLine": true,
    "arrowParens": "avoid",
    "endOfLine": "lf"
  }
}
40
node_modules/@npmcli/agent/README.md
generated
vendored
Normal file
@ -0,0 +1,40 @@
## @npmcli/agent

A pair of Agent implementations for nodejs that provide consistent keep-alives, granular timeouts, dns caching, and proxy support.

### Usage

```js
const { getAgent, HttpAgent } = require('@npmcli/agent')
const fetch = require('minipass-fetch')

const main = async () => {
  // if you know what agent you need, you can create one directly:
  //   const agent = new HttpAgent(agentOptions)
  // or you can use the getAgent helper, it will determine and create an Agent
  // instance for you as well as reuse that agent for new requests as appropriate
  const agent = getAgent('https://registry.npmjs.org/npm', agentOptions)
  // minipass-fetch is just an example, this will work for any http client that
  // supports node's Agents
  const res = await fetch('https://registry.npmjs.org/npm', { agent })
}

main()
```

### Options

All options supported by the node Agent implementations are supported here, see [the docs](https://nodejs.org/api/http.html#new-agentoptions) for those.

Options that have been added by this module include:

- `family`: what tcp family to use, can be `4` for IPv4, `6` for IPv6 or `0` for both.
- `proxy`: a URL to a supported proxy, currently supports `HTTP CONNECT` based http/https proxies as well as socks4 and 5.
- `dns`: configuration for the built-in dns cache
  - `ttl`: how long (in milliseconds) to keep cached dns entries, defaults to `5 * 60 * 1000` (5 minutes)
  - `lookup`: optional function to override how dns lookups are performed, defaults to `require('dns').lookup`
- `timeouts`: a set of granular timeouts, all default to `0`
  - `connection`: time between initiating connection and actually connecting
  - `idle`: time between data packets (if a top level `timeout` is provided, it will be copied here)
  - `response`: time between sending a request and receiving a response
  - `transfer`: time between starting to receive a request and consuming the response fully
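Taken together, the options above all go in `getAgent`'s second argument. The following is a minimal sketch of a fully-configured call; the registry URL, proxy address, and timeout values are illustrative assumptions, not library defaults:

```js
// Hypothetical configuration exercising each added option; values are
// examples only, not recommendations.
const { getAgent } = require('@npmcli/agent')

const agent = getAgent('https://registry.npmjs.org/npm', {
  family: 0,                              // accept both IPv4 and IPv6
  proxy: 'http://proxy.example.com:8080', // HTTP CONNECT or socks4/5 proxy URL
  dns: { ttl: 5 * 60 * 1000 },            // cache dns answers for 5 minutes
  timeouts: {
    connection: 1000, // fail if the socket takes more than 1s to connect
    idle: 5000,       // fail if more than 5s passes between data packets
    response: 10000,  // fail if no response begins within 10s
    transfer: 60000,  // fail if the body takes more than 60s to consume
  },
})
// pass `agent` to any http client that supports node Agents
```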
206
node_modules/@npmcli/agent/lib/agents.js
generated
vendored
Normal file
@ -0,0 +1,206 @@
'use strict'

const net = require('net')
const tls = require('tls')
const { once } = require('events')
const timers = require('timers/promises')
const { normalizeOptions, cacheOptions } = require('./options')
const { getProxy, getProxyAgent, proxyCache } = require('./proxy.js')
const Errors = require('./errors.js')
const { Agent: AgentBase } = require('agent-base')

module.exports = class Agent extends AgentBase {
  #options
  #timeouts
  #proxy
  #noProxy
  #ProxyAgent

  constructor (options = {}) {
    const { timeouts, proxy, noProxy, ...normalizedOptions } = normalizeOptions(options)

    super(normalizedOptions)

    this.#options = normalizedOptions
    this.#timeouts = timeouts

    if (proxy) {
      this.#proxy = new URL(proxy)
      this.#noProxy = noProxy
      this.#ProxyAgent = getProxyAgent(proxy)
    }
  }

  get proxy () {
    return this.#proxy ? { url: this.#proxy } : {}
  }

  #getProxy (options) {
    if (!this.#proxy) {
      return
    }

    const proxy = getProxy(`${options.protocol}//${options.host}:${options.port}`, {
      proxy: this.#proxy,
      noProxy: this.#noProxy,
    })

    if (!proxy) {
      return
    }

    const cacheKey = cacheOptions({
      ...options,
      ...this.#options,
      timeouts: this.#timeouts,
      proxy,
    })

    if (proxyCache.has(cacheKey)) {
      return proxyCache.get(cacheKey)
    }

    let ProxyAgent = this.#ProxyAgent
    if (Array.isArray(ProxyAgent)) {
      ProxyAgent = this.isSecureEndpoint(options) ? ProxyAgent[1] : ProxyAgent[0]
    }

    const proxyAgent = new ProxyAgent(proxy, {
      ...this.#options,
      socketOptions: { family: this.#options.family },
    })
    proxyCache.set(cacheKey, proxyAgent)

    return proxyAgent
  }

  // takes an array of promises and races them against the connection timeout
  // which will throw the necessary error if it is hit. This will return the
  // result of the promise race.
  async #timeoutConnection ({ promises, options, timeout }, ac = new AbortController()) {
    if (timeout) {
      const connectionTimeout = timers.setTimeout(timeout, null, { signal: ac.signal })
        .then(() => {
          throw new Errors.ConnectionTimeoutError(`${options.host}:${options.port}`)
        }).catch((err) => {
          if (err.name === 'AbortError') {
            return
          }
          throw err
        })
      promises.push(connectionTimeout)
    }

    let result
    try {
      result = await Promise.race(promises)
      ac.abort()
    } catch (err) {
      ac.abort()
      throw err
    }
    return result
  }

  async connect (request, options) {
    // if the connection does not have its own lookup function
    // set, then use the one from our options
    options.lookup ??= this.#options.lookup

    let socket
    let timeout = this.#timeouts.connection
    const isSecureEndpoint = this.isSecureEndpoint(options)

    const proxy = this.#getProxy(options)
    if (proxy) {
      // some of the proxies will wait for the socket to fully connect before
      // returning so we have to await this while also racing it against the
      // connection timeout.
      const start = Date.now()
      socket = await this.#timeoutConnection({
        options,
        timeout,
        promises: [proxy.connect(request, options)],
      })
      // see how much time proxy.connect took and subtract it from
      // the timeout
      if (timeout) {
        timeout = timeout - (Date.now() - start)
      }
    } else {
      socket = (isSecureEndpoint ? tls : net).connect(options)
    }

    socket.setKeepAlive(this.keepAlive, this.keepAliveMsecs)
    socket.setNoDelay(this.keepAlive)

    const abortController = new AbortController()
    const { signal } = abortController

    const connectPromise = socket[isSecureEndpoint ? 'secureConnecting' : 'connecting']
      ? once(socket, isSecureEndpoint ? 'secureConnect' : 'connect', { signal })
      : Promise.resolve()

    await this.#timeoutConnection({
      options,
      timeout,
      promises: [
        connectPromise,
        once(socket, 'error', { signal }).then((err) => {
          throw err[0]
        }),
      ],
    }, abortController)

    if (this.#timeouts.idle) {
      socket.setTimeout(this.#timeouts.idle, () => {
        socket.destroy(new Errors.IdleTimeoutError(`${options.host}:${options.port}`))
      })
    }

    return socket
  }

  addRequest (request, options) {
    const proxy = this.#getProxy(options)
    // it would be better to call proxy.addRequest here but this causes the
    // http-proxy-agent to call its super.addRequest which causes the request
    // to be added to the agent twice. since we only support 3 agents
    // currently (see the required agents in proxy.js) we have manually
    // checked that the only public methods we need to call are called in the
    // next block. this could change in the future and presumably we would get
    // failing tests until we have properly called the necessary methods on
    // each of our proxy agents
    if (proxy?.setRequestProps) {
      proxy.setRequestProps(request, options)
    }

    request.setHeader('connection', this.keepAlive ? 'keep-alive' : 'close')

    if (this.#timeouts.response) {
      let responseTimeout
      request.once('finish', () => {
        // assign the timer so the 'response' handler below can actually clear it
        responseTimeout = setTimeout(() => {
          request.destroy(new Errors.ResponseTimeoutError(request, this.#proxy))
        }, this.#timeouts.response)
      })
      request.once('response', () => {
        clearTimeout(responseTimeout)
      })
    }

    if (this.#timeouts.transfer) {
      let transferTimeout
      request.once('response', (res) => {
        // assign the timer so the 'close' handler below can actually clear it
        transferTimeout = setTimeout(() => {
          res.destroy(new Errors.TransferTimeoutError(request, this.#proxy))
        }, this.#timeouts.transfer)
        res.once('close', () => {
          clearTimeout(transferTimeout)
        })
      })
    }

    return super.addRequest(request, options)
  }
}
53
node_modules/@npmcli/agent/lib/dns.js
generated
vendored
Normal file
@ -0,0 +1,53 @@
'use strict'

const { LRUCache } = require('lru-cache')
const dns = require('dns')

// this is a factory so that each request can have its own opts (i.e. ttl)
// while still sharing the cache across all requests
const cache = new LRUCache({ max: 50 })

const getOptions = ({
  family = 0,
  hints = dns.ADDRCONFIG,
  all = false,
  verbatim = undefined,
  ttl = 5 * 60 * 1000,
  lookup = dns.lookup,
}) => ({
  // hints and lookup are returned since both are top level properties to (net|tls).connect
  hints,
  lookup: (hostname, ...args) => {
    const callback = args.pop() // callback is always last arg
    const lookupOptions = args[0] ?? {}

    const options = {
      family,
      hints,
      all,
      verbatim,
      ...(typeof lookupOptions === 'number' ? { family: lookupOptions } : lookupOptions),
    }

    const key = JSON.stringify({ hostname, ...options })

    if (cache.has(key)) {
      const cached = cache.get(key)
      return process.nextTick(callback, null, ...cached)
    }

    lookup(hostname, options, (err, ...result) => {
      if (err) {
        return callback(err)
      }

      cache.set(key, result, { ttl })
      return callback(null, ...result)
    })
  },
})

module.exports = {
  cache,
  getOptions,
}
61
node_modules/@npmcli/agent/lib/errors.js
generated
vendored
Normal file
@ -0,0 +1,61 @@
'use strict'

class InvalidProxyProtocolError extends Error {
  constructor (url) {
    super(`Invalid protocol \`${url.protocol}\` connecting to proxy \`${url.host}\``)
    this.code = 'EINVALIDPROXY'
    this.proxy = url
  }
}

class ConnectionTimeoutError extends Error {
  constructor (host) {
    super(`Timeout connecting to host \`${host}\``)
    this.code = 'ECONNECTIONTIMEOUT'
    this.host = host
  }
}

class IdleTimeoutError extends Error {
  constructor (host) {
    super(`Idle timeout reached for host \`${host}\``)
    this.code = 'EIDLETIMEOUT'
    this.host = host
  }
}

class ResponseTimeoutError extends Error {
  constructor (request, proxy) {
    let msg = 'Response timeout '
    if (proxy) {
      msg += `from proxy \`${proxy.host}\` `
    }
    msg += `connecting to host \`${request.host}\``
    super(msg)
    this.code = 'ERESPONSETIMEOUT'
    this.proxy = proxy
    this.request = request
  }
}

class TransferTimeoutError extends Error {
  constructor (request, proxy) {
    let msg = 'Transfer timeout '
    if (proxy) {
      msg += `from proxy \`${proxy.host}\` `
    }
    msg += `for \`${request.host}\``
    super(msg)
    this.code = 'ETRANSFERTIMEOUT'
    this.proxy = proxy
    this.request = request
  }
}

module.exports = {
  InvalidProxyProtocolError,
  ConnectionTimeoutError,
  IdleTimeoutError,
  ResponseTimeoutError,
  TransferTimeoutError,
}
56
node_modules/@npmcli/agent/lib/index.js
generated
vendored
Normal file
@ -0,0 +1,56 @@
'use strict'

const { LRUCache } = require('lru-cache')
const { normalizeOptions, cacheOptions } = require('./options')
const { getProxy, proxyCache } = require('./proxy.js')
const dns = require('./dns.js')
const Agent = require('./agents.js')

const agentCache = new LRUCache({ max: 20 })

const getAgent = (url, { agent, proxy, noProxy, ...options } = {}) => {
  // false has meaning so this can't be a simple truthiness check
  if (agent != null) {
    return agent
  }

  url = new URL(url)

  const proxyForUrl = getProxy(url, { proxy, noProxy })
  const normalizedOptions = {
    ...normalizeOptions(options),
    proxy: proxyForUrl,
  }

  const cacheKey = cacheOptions({
    ...normalizedOptions,
    secureEndpoint: url.protocol === 'https:',
  })

  if (agentCache.has(cacheKey)) {
    return agentCache.get(cacheKey)
  }

  const newAgent = new Agent(normalizedOptions)
  agentCache.set(cacheKey, newAgent)

  return newAgent
}

module.exports = {
  getAgent,
  Agent,
  // these are exported for backwards compatibility
  HttpAgent: Agent,
  HttpsAgent: Agent,
  cache: {
    proxy: proxyCache,
    agent: agentCache,
    dns: dns.cache,
    clear: () => {
      proxyCache.clear()
      agentCache.clear()
      dns.cache.clear()
    },
  },
}
86
node_modules/@npmcli/agent/lib/options.js
generated
vendored
Normal file
@ -0,0 +1,86 @@
'use strict'

const dns = require('./dns')

const normalizeOptions = (opts) => {
  const family = parseInt(opts.family ?? '0', 10)
  const keepAlive = opts.keepAlive ?? true

  const normalized = {
    // nodejs http agent options. these are all the defaults
    // but kept here to increase the likelihood of cache hits
    // https://nodejs.org/api/http.html#new-agentoptions
    keepAliveMsecs: keepAlive ? 1000 : undefined,
    maxSockets: opts.maxSockets ?? 15,
    maxTotalSockets: Infinity,
    maxFreeSockets: keepAlive ? 256 : undefined,
    scheduling: 'fifo',
    // then spread the rest of the options
    ...opts,
    // we already set these to their defaults that we want
    family,
    keepAlive,
    // our custom timeout options
    timeouts: {
      // the standard timeout option is mapped to our idle timeout
      // and then deleted below
      idle: opts.timeout ?? 0,
      connection: 0,
      response: 0,
      transfer: 0,
      ...opts.timeouts,
    },
    // get the dns options that go at the top level of socket connection
    ...dns.getOptions({ family, ...opts.dns }),
  }

  // remove timeout since we already used it to set our own idle timeout
  delete normalized.timeout

  return normalized
}

const createKey = (obj) => {
  let key = ''
  // sort entries by key name so equivalent option objects produce the same string
  const sorted = Object.entries(obj).sort((a, b) => a[0].localeCompare(b[0]))
  for (let [k, v] of sorted) {
    if (v == null) {
      v = 'null'
    } else if (v instanceof URL) {
      v = v.toString()
    } else if (typeof v === 'object') {
      v = createKey(v)
    }
    key += `${k}:${v}:`
  }
  return key
}

const cacheOptions = ({ secureEndpoint, ...options }) => createKey({
  secureEndpoint: !!secureEndpoint,
  // socket connect options
  family: options.family,
  hints: options.hints,
  localAddress: options.localAddress,
  // tls specific connect options
  strictSsl: secureEndpoint ? !!options.rejectUnauthorized : false,
  ca: secureEndpoint ? options.ca : null,
  cert: secureEndpoint ? options.cert : null,
  key: secureEndpoint ? options.key : null,
  // http agent options
  keepAlive: options.keepAlive,
  keepAliveMsecs: options.keepAliveMsecs,
  maxSockets: options.maxSockets,
  maxTotalSockets: options.maxTotalSockets,
  maxFreeSockets: options.maxFreeSockets,
  scheduling: options.scheduling,
  // timeout options
  timeouts: options.timeouts,
  // proxy
  proxy: options.proxy,
})

module.exports = {
  normalizeOptions,
  cacheOptions,
}
88
node_modules/@npmcli/agent/lib/proxy.js
generated
vendored
Normal file
@ -0,0 +1,88 @@
'use strict'

const { HttpProxyAgent } = require('http-proxy-agent')
const { HttpsProxyAgent } = require('https-proxy-agent')
const { SocksProxyAgent } = require('socks-proxy-agent')
const { LRUCache } = require('lru-cache')
const { InvalidProxyProtocolError } = require('./errors.js')

const PROXY_CACHE = new LRUCache({ max: 20 })

const SOCKS_PROTOCOLS = new Set(SocksProxyAgent.protocols)

const PROXY_ENV_KEYS = new Set(['https_proxy', 'http_proxy', 'proxy', 'no_proxy'])

const PROXY_ENV = Object.entries(process.env).reduce((acc, [key, value]) => {
  key = key.toLowerCase()
  if (PROXY_ENV_KEYS.has(key)) {
    acc[key] = value
  }
  return acc
}, {})

const getProxyAgent = (url) => {
  url = new URL(url)

  const protocol = url.protocol.slice(0, -1)
  if (SOCKS_PROTOCOLS.has(protocol)) {
    return SocksProxyAgent
  }
  if (protocol === 'https' || protocol === 'http') {
    return [HttpProxyAgent, HttpsProxyAgent]
  }

  throw new InvalidProxyProtocolError(url)
}

const isNoProxy = (url, noProxy) => {
  if (typeof noProxy === 'string') {
    noProxy = noProxy.split(',').map((p) => p.trim()).filter(Boolean)
  }

  if (!noProxy || !noProxy.length) {
    return false
  }

  const hostSegments = url.hostname.split('.').reverse()

  return noProxy.some((no) => {
    const noSegments = no.split('.').filter(Boolean).reverse()
    if (!noSegments.length) {
      return false
    }

    for (let i = 0; i < noSegments.length; i++) {
      if (hostSegments[i] !== noSegments[i]) {
        return false
      }
    }

    return true
  })
}

const getProxy = (url, { proxy, noProxy }) => {
  url = new URL(url)

  if (!proxy) {
    proxy = url.protocol === 'https:'
      ? PROXY_ENV.https_proxy
      : PROXY_ENV.https_proxy || PROXY_ENV.http_proxy || PROXY_ENV.proxy
  }

  if (!noProxy) {
    noProxy = PROXY_ENV.no_proxy
  }

  if (!proxy || isNoProxy(url, noProxy)) {
    return null
  }

  return new URL(proxy)
}

module.exports = {
  getProxyAgent,
  getProxy,
  proxyCache: PROXY_CACHE,
}
60
node_modules/@npmcli/agent/package.json
generated
vendored
Normal file
@ -0,0 +1,60 @@
{
  "name": "@npmcli/agent",
  "version": "4.0.0",
  "description": "the http/https agent used by the npm cli",
  "main": "lib/index.js",
  "scripts": {
    "gencerts": "bash scripts/create-cert.sh",
    "test": "tap",
    "lint": "npm run eslint",
    "postlint": "template-oss-check",
    "template-oss-apply": "template-oss-apply --force",
    "lintfix": "npm run eslint -- --fix",
    "snap": "tap",
    "posttest": "npm run lint",
    "eslint": "eslint \"**/*.{js,cjs,ts,mjs,jsx,tsx}\""
  },
  "author": "GitHub Inc.",
  "license": "ISC",
  "bugs": {
    "url": "https://github.com/npm/agent/issues"
  },
  "homepage": "https://github.com/npm/agent#readme",
  "files": [
    "bin/",
    "lib/"
  ],
  "engines": {
    "node": "^20.17.0 || >=22.9.0"
  },
  "templateOSS": {
    "//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
    "version": "4.25.0",
    "publish": "true"
  },
  "dependencies": {
    "agent-base": "^7.1.0",
    "http-proxy-agent": "^7.0.0",
    "https-proxy-agent": "^7.0.1",
    "lru-cache": "^11.2.1",
    "socks-proxy-agent": "^8.0.3"
  },
  "devDependencies": {
    "@npmcli/eslint-config": "^5.0.0",
    "@npmcli/template-oss": "4.25.0",
    "minipass-fetch": "^4.0.1",
    "nock": "^14.0.3",
    "socksv5": "^0.0.6",
    "tap": "^16.3.0"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/npm/agent.git"
  },
  "tap": {
    "nyc-arg": [
      "--exclude",
      "tap-snapshots/**"
    ]
  }
}
20
node_modules/@npmcli/fs/LICENSE.md
generated
vendored
Normal file
@ -0,0 +1,20 @@
<!-- This file is automatically added by @npmcli/template-oss. Do not edit. -->

ISC License

Copyright npm, Inc.

Permission to use, copy, modify, and/or distribute this
software for any purpose with or without fee is hereby
granted, provided that the above copyright notice and this
permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND NPM DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO
EVENT SHALL NPM BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.
97
node_modules/@npmcli/fs/README.md
generated
vendored
Normal file
@ -0,0 +1,97 @@
# @npmcli/fs

Polyfills and extensions of the core `fs` module.

## Features

- `fs.cp` polyfill for node < 16.7.0
- `fs.withTempDir` added
- `fs.readdirScoped` added
- `fs.moveFile` added

## `fs.withTempDir(root, fn, options) -> Promise`

### Parameters

- `root`: the directory in which to create the temporary directory
- `fn`: a function that will be called with the path to the temporary directory
- `options`
  - `tmpPrefix`: a prefix to be used in the generated directory name

### Usage

The `withTempDir` function creates a temporary directory, runs the provided
function (`fn`), then removes the temporary directory and resolves or rejects
based on the result of `fn`.

```js
const fs = require('@npmcli/fs')
const os = require('os')

// this function will be called with the full path to the temporary directory
// it is called with `await` behind the scenes, so it can be async if desired.
const myFunction = async (tempPath) => {
  return 'done!'
}

const main = async () => {
  const result = await fs.withTempDir(os.tmpdir(), myFunction)
  // result === 'done!'
}

main()
```

## `fs.readdirScoped(root) -> Promise`

### Parameters

- `root`: the directory to read

### Usage

Like `fs.readdir`, but it handles `@org/module` directories as if they were
a single entry.

```javascript
const { readdirScoped } = require('@npmcli/fs')
const entries = await readdirScoped('node_modules')
// entries will be something like: ['a', '@org/foo', '@org/bar']
```

## `fs.moveFile(source, dest, options) -> Promise`

A fork of [move-file](https://github.com/sindresorhus/move-file) with
support for CommonJS.

### Highlights

- Promise API.
- Supports moving a file across partitions and devices.
- Optionally prevent overwriting an existing file.
- Creates non-existent destination directories for you.
- Automatically recurses when source is a directory.

### Parameters

- `source`: File, or directory, you want to move.
- `dest`: Where you want the file or directory moved.
- `options`
  - `overwrite` (`boolean`, default: `true`): Overwrite existing destination file(s).

### Usage

The built-in
[`fs.rename()`](https://nodejs.org/api/fs.html#fs_fs_rename_oldpath_newpath_callback)
is just a JavaScript wrapper for the C `rename(2)` function, which doesn't
support moving files across partitions or devices. This module is what you
would have expected `fs.rename()` to be.

```js
const { moveFile } = require('@npmcli/fs');

(async () => {
  await moveFile('source/unicorn.png', 'destination/unicorn.png');
  console.log('The file has been moved');
})();
```
20
node_modules/@npmcli/fs/lib/common/get-options.js
generated
vendored
Normal file
@ -0,0 +1,20 @@
// given an input that may or may not be an object, return an object that has
// a copy of every defined property listed in 'copy'. if the input is not an
// object, assign it to the property named by 'wrap'
const getOptions = (input, { copy, wrap }) => {
  const result = {}

  if (input && typeof input === 'object') {
    for (const prop of copy) {
      if (input[prop] !== undefined) {
        result[prop] = input[prop]
      }
    }
  } else {
    result[wrap] = input
  }

  return result
}

module.exports = getOptions
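Editor's note: a minimal sketch of the two call shapes `getOptions` normalizes, assuming deep requires into the vendored `lib/` are permitted; the option values are illustrative:

```js
// Illustrative only: how getOptions handles object vs non-object input.
const getOptions = require('@npmcli/fs/lib/common/get-options.js')

// object input: copy only the whitelisted, defined properties
console.log(getOptions({ tmpPrefix: 'npm-', extra: true }, { copy: ['tmpPrefix'] }))
// { tmpPrefix: 'npm-' }

// non-object input: wrap it under the given property name
console.log(getOptions('npm-', { copy: [], wrap: 'tmpPrefix' }))
// { tmpPrefix: 'npm-' }
```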
9
node_modules/@npmcli/fs/lib/common/node.js
generated
vendored
Normal file
@ -0,0 +1,9 @@
const semver = require('semver')

const satisfies = (range) => {
  return semver.satisfies(process.version, range, { includePrerelease: true })
}

module.exports = {
  satisfies,
}
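Editor's note: this helper is how the package feature-detects the running node version (see `cp/index.js` below). A minimal sketch, assuming deep requires into `lib/` work:

```js
// Illustrative only: version feature-detection against process.version.
const node = require('@npmcli/fs/lib/common/node.js')

console.log(node.satisfies('>=16.7.0')) // true on any node this package supports
console.log(node.satisfies('<10.0.0')) // false on modern node
```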
15
node_modules/@npmcli/fs/lib/cp/LICENSE
generated
vendored
Normal file
@ -0,0 +1,15 @@
(The MIT License)

Copyright (c) 2011-2017 JP Richardson

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files
(the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
129
node_modules/@npmcli/fs/lib/cp/errors.js
generated
vendored
Normal file
@ -0,0 +1,129 @@
'use strict'
const { inspect } = require('util')

// adapted from node's internal/errors
// https://github.com/nodejs/node/blob/c8a04049/lib/internal/errors.js

// close copy of node's internal SystemError class.
class SystemError {
  constructor (code, prefix, context) {
    // XXX context.code is undefined in all constructors used in cp/polyfill
    // that may be a bug copied from node, maybe the constructor should use
    // `code` not `errno`? nodejs/node#41104
    let message = `${prefix}: ${context.syscall} returned ` +
                  `${context.code} (${context.message})`

    if (context.path !== undefined) {
      message += ` ${context.path}`
    }
    if (context.dest !== undefined) {
      message += ` => ${context.dest}`
    }

    this.code = code
    Object.defineProperties(this, {
      name: {
        value: 'SystemError',
        enumerable: false,
        writable: true,
        configurable: true,
      },
      message: {
        value: message,
        enumerable: false,
        writable: true,
        configurable: true,
      },
      info: {
        value: context,
        enumerable: true,
        configurable: true,
        writable: false,
      },
      errno: {
        get () {
          return context.errno
        },
        set (value) {
          context.errno = value
        },
        enumerable: true,
        configurable: true,
      },
      syscall: {
        get () {
          return context.syscall
        },
        set (value) {
          context.syscall = value
        },
        enumerable: true,
        configurable: true,
      },
    })

    if (context.path !== undefined) {
      Object.defineProperty(this, 'path', {
        get () {
          return context.path
        },
        set (value) {
          context.path = value
        },
        enumerable: true,
        configurable: true,
      })
    }

    if (context.dest !== undefined) {
      Object.defineProperty(this, 'dest', {
        get () {
          return context.dest
        },
        set (value) {
          context.dest = value
        },
        enumerable: true,
        configurable: true,
      })
    }
  }

  toString () {
    return `${this.name} [${this.code}]: ${this.message}`
  }

  [Symbol.for('nodejs.util.inspect.custom')] (_recurseTimes, ctx) {
    return inspect(this, {
      ...ctx,
      getters: true,
      customInspect: false,
    })
  }
}

function E (code, message) {
  module.exports[code] = class NodeError extends SystemError {
    constructor (ctx) {
      super(code, message, ctx)
    }
  }
}

E('ERR_FS_CP_DIR_TO_NON_DIR', 'Cannot overwrite directory with non-directory')
E('ERR_FS_CP_EEXIST', 'Target already exists')
E('ERR_FS_CP_EINVAL', 'Invalid src or dest')
E('ERR_FS_CP_FIFO_PIPE', 'Cannot copy a FIFO pipe')
E('ERR_FS_CP_NON_DIR_TO_DIR', 'Cannot overwrite non-directory with directory')
E('ERR_FS_CP_SOCKET', 'Cannot copy a socket file')
E('ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY', 'Cannot overwrite symlink in subdirectory of self')
E('ERR_FS_CP_UNKNOWN', 'Cannot copy an unknown file type')
E('ERR_FS_EISDIR', 'Path is a directory')

module.exports.ERR_INVALID_ARG_TYPE = class ERR_INVALID_ARG_TYPE extends Error {
  constructor (name, expected, actual) {
    super()
    this.code = 'ERR_INVALID_ARG_TYPE'
    this.message = `The ${name} argument must be ${expected}. Received ${typeof actual}`
  }
}
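Editor's note: the `E()` factory attaches one `SystemError` subclass per code to `module.exports`. A minimal sketch of constructing one, assuming deep requires into `lib/` work; the path and errno are hypothetical. Note that the message interpolates `context.code`, which the polyfill's callers never set (see the XXX comment above), so it prints as `undefined`:

```js
// Illustrative only: instantiating a generated cp error class.
const errors = require('@npmcli/fs/lib/cp/errors.js')

const err = new errors.ERR_FS_CP_EEXIST({
  message: 'dest exists',
  path: '/tmp/dest', // hypothetical
  syscall: 'cp',
  errno: -17,
})
console.log(err.toString())
// SystemError [ERR_FS_CP_EEXIST]: Target already exists: cp returned undefined (dest exists) /tmp/dest
```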
22
node_modules/@npmcli/fs/lib/cp/index.js
generated
vendored
Normal file
@ -0,0 +1,22 @@
const fs = require('fs/promises')
const getOptions = require('../common/get-options.js')
const node = require('../common/node.js')
const polyfill = require('./polyfill.js')

// node 16.7.0 added fs.cp
const useNative = node.satisfies('>=16.7.0')

const cp = async (src, dest, opts) => {
  const options = getOptions(opts, {
    copy: ['dereference', 'errorOnExist', 'filter', 'force', 'preserveTimestamps', 'recursive'],
  })

  // the polyfill is tested separately from this module, no need to hack
  // process.version to try to trigger it just for coverage
  // istanbul ignore next
  return useNative
    ? fs.cp(src, dest, options)
    : polyfill(src, dest, options)
}

module.exports = cp
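Editor's note: a minimal sketch of the exported wrapper, which dispatches to native `fs.cp` or the polyfill depending on the node version. The fixture paths are hypothetical:

```js
// Illustrative only: recursive copy via the exported cp wrapper.
const { cp } = require('@npmcli/fs')

const main = async () => {
  await cp('fixtures/src-dir', 'fixtures/dest-dir', {
    recursive: true,
    force: false,
    errorOnExist: true, // only honored when force is disabled
  })
}

main().catch(console.error)
```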
428
node_modules/@npmcli/fs/lib/cp/polyfill.js
generated
vendored
Normal file
@ -0,0 +1,428 @@
// this file is a modified version of the code in node 17.2.0
// which is, in turn, a modified version of the fs-extra module on npm
// node core changes:
// - Use of the assert module has been replaced with core's error system.
// - All code related to the glob dependency has been removed.
// - Bring your own custom fs module is not currently supported.
// - Some basic code cleanup.
// changes here:
// - remove all callback related code
// - drop sync support
// - change assertions back to non-internal methods (see options.js)
// - throws ENOTDIR when rmdir gets an ENOENT for a path that exists in Windows
'use strict'

const {
  ERR_FS_CP_DIR_TO_NON_DIR,
  ERR_FS_CP_EEXIST,
  ERR_FS_CP_EINVAL,
  ERR_FS_CP_FIFO_PIPE,
  ERR_FS_CP_NON_DIR_TO_DIR,
  ERR_FS_CP_SOCKET,
  ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY,
  ERR_FS_CP_UNKNOWN,
  ERR_FS_EISDIR,
  ERR_INVALID_ARG_TYPE,
} = require('./errors.js')
const {
  constants: {
    errno: {
      EEXIST,
      EISDIR,
      EINVAL,
      ENOTDIR,
    },
  },
} = require('os')
const {
  chmod,
  copyFile,
  lstat,
  mkdir,
  readdir,
  readlink,
  stat,
  symlink,
  unlink,
  utimes,
} = require('fs/promises')
const {
  dirname,
  isAbsolute,
  join,
  parse,
  resolve,
  sep,
  toNamespacedPath,
} = require('path')
const { fileURLToPath } = require('url')

const defaultOptions = {
  dereference: false,
  errorOnExist: false,
  filter: undefined,
  force: true,
  preserveTimestamps: false,
  recursive: false,
}

async function cp (src, dest, opts) {
  if (opts != null && typeof opts !== 'object') {
    throw new ERR_INVALID_ARG_TYPE('options', ['Object'], opts)
  }
  return cpFn(
    toNamespacedPath(getValidatedPath(src)),
    toNamespacedPath(getValidatedPath(dest)),
    { ...defaultOptions, ...opts })
}

function getValidatedPath (fileURLOrPath) {
  const path = fileURLOrPath != null && fileURLOrPath.href
    && fileURLOrPath.origin
    ? fileURLToPath(fileURLOrPath)
    : fileURLOrPath
  return path
}

async function cpFn (src, dest, opts) {
  // Warn about using preserveTimestamps on 32-bit node
  // istanbul ignore next
  if (opts.preserveTimestamps && process.arch === 'ia32') {
    const warning = 'Using the preserveTimestamps option in 32-bit ' +
      'node is not recommended'
    process.emitWarning(warning, 'TimestampPrecisionWarning')
  }
  const stats = await checkPaths(src, dest, opts)
  const { srcStat, destStat } = stats
  await checkParentPaths(src, srcStat, dest)
  if (opts.filter) {
    return handleFilter(checkParentDir, destStat, src, dest, opts)
  }
  return checkParentDir(destStat, src, dest, opts)
}

async function checkPaths (src, dest, opts) {
  const { 0: srcStat, 1: destStat } = await getStats(src, dest, opts)
  if (destStat) {
    if (areIdentical(srcStat, destStat)) {
      throw new ERR_FS_CP_EINVAL({
        message: 'src and dest cannot be the same',
        path: dest,
        syscall: 'cp',
        errno: EINVAL,
      })
    }
    if (srcStat.isDirectory() && !destStat.isDirectory()) {
      throw new ERR_FS_CP_DIR_TO_NON_DIR({
        message: `cannot overwrite directory ${src} ` +
            `with non-directory ${dest}`,
        path: dest,
        syscall: 'cp',
        errno: EISDIR,
      })
    }
    if (!srcStat.isDirectory() && destStat.isDirectory()) {
      throw new ERR_FS_CP_NON_DIR_TO_DIR({
        message: `cannot overwrite non-directory ${src} ` +
            `with directory ${dest}`,
        path: dest,
        syscall: 'cp',
        errno: ENOTDIR,
      })
    }
  }

  if (srcStat.isDirectory() && isSrcSubdir(src, dest)) {
    throw new ERR_FS_CP_EINVAL({
      message: `cannot copy ${src} to a subdirectory of self ${dest}`,
      path: dest,
      syscall: 'cp',
      errno: EINVAL,
    })
  }
  return { srcStat, destStat }
}

function areIdentical (srcStat, destStat) {
  return destStat.ino && destStat.dev && destStat.ino === srcStat.ino &&
    destStat.dev === srcStat.dev
}

function getStats (src, dest, opts) {
  const statFunc = opts.dereference ?
    (file) => stat(file, { bigint: true }) :
    (file) => lstat(file, { bigint: true })
  return Promise.all([
    statFunc(src),
    statFunc(dest).catch((err) => {
      // istanbul ignore next: unsure how to cover.
      if (err.code === 'ENOENT') {
        return null
      }
      // istanbul ignore next: unsure how to cover.
      throw err
    }),
  ])
}

async function checkParentDir (destStat, src, dest, opts) {
  const destParent = dirname(dest)
  const dirExists = await pathExists(destParent)
  if (dirExists) {
    return getStatsForCopy(destStat, src, dest, opts)
  }
  await mkdir(destParent, { recursive: true })
  return getStatsForCopy(destStat, src, dest, opts)
}

function pathExists (dest) {
  return stat(dest).then(
    () => true,
    // istanbul ignore next: not sure when this would occur
    (err) => (err.code === 'ENOENT' ? false : Promise.reject(err)))
}

// Recursively check if dest parent is a subdirectory of src.
// It works for all file types including symlinks since it
// checks the src and dest inodes. It starts from the deepest
// parent and stops once it reaches the src parent or the root path.
async function checkParentPaths (src, srcStat, dest) {
  const srcParent = resolve(dirname(src))
  const destParent = resolve(dirname(dest))
  if (destParent === srcParent || destParent === parse(destParent).root) {
    return
  }
  let destStat
  try {
    destStat = await stat(destParent, { bigint: true })
  } catch (err) {
    // istanbul ignore else: not sure when this would occur
    if (err.code === 'ENOENT') {
      return
    }
    // istanbul ignore next: not sure when this would occur
    throw err
  }
  if (areIdentical(srcStat, destStat)) {
    throw new ERR_FS_CP_EINVAL({
      message: `cannot copy ${src} to a subdirectory of self ${dest}`,
      path: dest,
      syscall: 'cp',
      errno: EINVAL,
    })
  }
  return checkParentPaths(src, srcStat, destParent)
}

const normalizePathToArray = (path) =>
  resolve(path).split(sep).filter(Boolean)

// Return true if dest is a subdir of src, otherwise false.
// It only checks the path strings.
function isSrcSubdir (src, dest) {
  const srcArr = normalizePathToArray(src)
  const destArr = normalizePathToArray(dest)
  return srcArr.every((cur, i) => destArr[i] === cur)
}

async function handleFilter (onInclude, destStat, src, dest, opts, cb) {
  const include = await opts.filter(src, dest)
  if (include) {
    return onInclude(destStat, src, dest, opts, cb)
  }
}

function startCopy (destStat, src, dest, opts) {
  if (opts.filter) {
    return handleFilter(getStatsForCopy, destStat, src, dest, opts)
  }
  return getStatsForCopy(destStat, src, dest, opts)
}

async function getStatsForCopy (destStat, src, dest, opts) {
  const statFn = opts.dereference ? stat : lstat
  const srcStat = await statFn(src)
  // istanbul ignore else: can't portably test FIFO
  if (srcStat.isDirectory() && opts.recursive) {
    return onDir(srcStat, destStat, src, dest, opts)
  } else if (srcStat.isDirectory()) {
    throw new ERR_FS_EISDIR({
      message: `${src} is a directory (not copied)`,
      path: src,
      syscall: 'cp',
      errno: EINVAL,
    })
  } else if (srcStat.isFile() ||
            srcStat.isCharacterDevice() ||
            srcStat.isBlockDevice()) {
    return onFile(srcStat, destStat, src, dest, opts)
  } else if (srcStat.isSymbolicLink()) {
    return onLink(destStat, src, dest)
  } else if (srcStat.isSocket()) {
    throw new ERR_FS_CP_SOCKET({
      message: `cannot copy a socket file: ${dest}`,
      path: dest,
      syscall: 'cp',
      errno: EINVAL,
    })
  } else if (srcStat.isFIFO()) {
    throw new ERR_FS_CP_FIFO_PIPE({
      message: `cannot copy a FIFO pipe: ${dest}`,
      path: dest,
      syscall: 'cp',
      errno: EINVAL,
    })
  }
  // istanbul ignore next: should be unreachable
  throw new ERR_FS_CP_UNKNOWN({
    message: `cannot copy an unknown file type: ${dest}`,
    path: dest,
    syscall: 'cp',
    errno: EINVAL,
  })
}

function onFile (srcStat, destStat, src, dest, opts) {
  if (!destStat) {
    return _copyFile(srcStat, src, dest, opts)
  }
  return mayCopyFile(srcStat, src, dest, opts)
}

async function mayCopyFile (srcStat, src, dest, opts) {
  if (opts.force) {
    await unlink(dest)
    return _copyFile(srcStat, src, dest, opts)
  } else if (opts.errorOnExist) {
    throw new ERR_FS_CP_EEXIST({
      message: `${dest} already exists`,
      path: dest,
      syscall: 'cp',
      errno: EEXIST,
    })
  }
}

async function _copyFile (srcStat, src, dest, opts) {
  await copyFile(src, dest)
  if (opts.preserveTimestamps) {
    return handleTimestampsAndMode(srcStat.mode, src, dest)
  }
  return setDestMode(dest, srcStat.mode)
}

async function handleTimestampsAndMode (srcMode, src, dest) {
  // Make sure the file is writable before setting the timestamp
  // otherwise open fails with EPERM when invoked with 'r+'
  // (through utimes call)
  if (fileIsNotWritable(srcMode)) {
    await makeFileWritable(dest, srcMode)
    return setDestTimestampsAndMode(srcMode, src, dest)
  }
  return setDestTimestampsAndMode(srcMode, src, dest)
}

function fileIsNotWritable (srcMode) {
  return (srcMode & 0o200) === 0
}

function makeFileWritable (dest, srcMode) {
  return setDestMode(dest, srcMode | 0o200)
}

async function setDestTimestampsAndMode (srcMode, src, dest) {
  await setDestTimestamps(src, dest)
  return setDestMode(dest, srcMode)
}

function setDestMode (dest, srcMode) {
  return chmod(dest, srcMode)
}

async function setDestTimestamps (src, dest) {
  // The initial srcStat.atime cannot be trusted
  // because it is modified by the read(2) system call
  // (See https://nodejs.org/api/fs.html#fs_stat_time_values)
  const updatedSrcStat = await stat(src)
  return utimes(dest, updatedSrcStat.atime, updatedSrcStat.mtime)
}

function onDir (srcStat, destStat, src, dest, opts) {
  if (!destStat) {
    return mkDirAndCopy(srcStat.mode, src, dest, opts)
  }
  return copyDir(src, dest, opts)
}

async function mkDirAndCopy (srcMode, src, dest, opts) {
  await mkdir(dest)
  await copyDir(src, dest, opts)
  return setDestMode(dest, srcMode)
}

async function copyDir (src, dest, opts) {
  const dir = await readdir(src)
  for (let i = 0; i < dir.length; i++) {
    const item = dir[i]
    const srcItem = join(src, item)
    const destItem = join(dest, item)
    const { destStat } = await checkPaths(srcItem, destItem, opts)
    await startCopy(destStat, srcItem, destItem, opts)
  }
}

async function onLink (destStat, src, dest) {
  let resolvedSrc = await readlink(src)
  if (!isAbsolute(resolvedSrc)) {
    resolvedSrc = resolve(dirname(src), resolvedSrc)
  }
  if (!destStat) {
    return symlink(resolvedSrc, dest)
  }
  let resolvedDest
  try {
    resolvedDest = await readlink(dest)
  } catch (err) {
    // Dest exists and is a regular file or directory,
    // Windows may throw UNKNOWN error. If dest already exists,
    // fs throws error anyway, so no need to guard against it here.
    // istanbul ignore next: can only test on windows
    if (err.code === 'EINVAL' || err.code === 'UNKNOWN') {
      return symlink(resolvedSrc, dest)
    }
    // istanbul ignore next: should not be possible
    throw err
  }
  if (!isAbsolute(resolvedDest)) {
    resolvedDest = resolve(dirname(dest), resolvedDest)
  }
  if (isSrcSubdir(resolvedSrc, resolvedDest)) {
    throw new ERR_FS_CP_EINVAL({
      message: `cannot copy ${resolvedSrc} to a subdirectory of self ` +
          `${resolvedDest}`,
      path: dest,
      syscall: 'cp',
      errno: EINVAL,
    })
  }
  // Do not copy if src is a subdir of dest since unlinking
  // dest in this case would result in removing src contents
  // and therefore a broken symlink would be created.
  const srcStat = await stat(src)
  if (srcStat.isDirectory() && isSrcSubdir(resolvedDest, resolvedSrc)) {
    throw new ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY({
      message: `cannot overwrite ${resolvedDest} with ${resolvedSrc}`,
      path: dest,
      syscall: 'cp',
      errno: EINVAL,
    })
  }
  return copyLink(resolvedSrc, dest)
}

async function copyLink (resolvedSrc, dest) {
  await unlink(dest)
  return symlink(resolvedSrc, dest)
}

module.exports = cp
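Editor's note: since the polyfill exports its `cp` directly, it can be exercised without the native-version gate in `cp/index.js`. A minimal sketch under hypothetical paths, assuming deep requires into `lib/` work:

```js
// Illustrative only: calling the polyfill directly, bypassing the
// useNative gate in cp/index.js. Paths are hypothetical.
const cpPolyfill = require('@npmcli/fs/lib/cp/polyfill.js')

const main = async () => {
  // copying a directory requires recursive: true; otherwise the
  // polyfill throws ERR_FS_EISDIR, just like native fs.cp
  await cpPolyfill('fixtures/tree', 'fixtures/tree-copy', { recursive: true })
}

main().catch(console.error)
```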
13
node_modules/@npmcli/fs/lib/index.js
generated
vendored
Normal file
@ -0,0 +1,13 @@
'use strict'

const cp = require('./cp/index.js')
const withTempDir = require('./with-temp-dir.js')
const readdirScoped = require('./readdir-scoped.js')
const moveFile = require('./move-file.js')

module.exports = {
  cp,
  withTempDir,
  readdirScoped,
  moveFile,
}
78
node_modules/@npmcli/fs/lib/move-file.js
generated
vendored
Normal file
@ -0,0 +1,78 @@
const { dirname, join, resolve, relative, isAbsolute } = require('path')
const fs = require('fs/promises')

const pathExists = async path => {
  try {
    await fs.access(path)
    return true
  } catch (er) {
    return er.code !== 'ENOENT'
  }
}

const moveFile = async (source, destination, options = {}, root = true, symlinks = []) => {
  if (!source || !destination) {
    throw new TypeError('`source` and `destination` file required')
  }

  options = {
    overwrite: true,
    ...options,
  }

  if (!options.overwrite && await pathExists(destination)) {
    throw new Error(`The destination file exists: ${destination}`)
  }

  await fs.mkdir(dirname(destination), { recursive: true })

  try {
    await fs.rename(source, destination)
  } catch (error) {
    if (error.code === 'EXDEV' || error.code === 'EPERM') {
      const sourceStat = await fs.lstat(source)
      if (sourceStat.isDirectory()) {
        const files = await fs.readdir(source)
        await Promise.all(files.map((file) =>
          moveFile(join(source, file), join(destination, file), options, false, symlinks)
        ))
      } else if (sourceStat.isSymbolicLink()) {
        symlinks.push({ source, destination })
      } else {
        await fs.copyFile(source, destination)
      }
    } else {
      throw error
    }
  }

  if (root) {
    await Promise.all(symlinks.map(async ({ source: symSource, destination: symDestination }) => {
      let target = await fs.readlink(symSource)
      // junction symlinks in windows will be absolute paths, so we need to
      // make sure they point to the symlink destination
      if (isAbsolute(target)) {
        target = resolve(symDestination, relative(symSource, target))
      }
      // try to determine what the actual file is so we can create the correct
      // type of symlink in windows
      let targetStat = 'file'
      try {
        targetStat = await fs.stat(resolve(dirname(symSource), target))
        if (targetStat.isDirectory()) {
          targetStat = 'junction'
        }
      } catch {
        // targetStat remains 'file'
      }
      await fs.symlink(
        target,
        symDestination,
        targetStat
      )
    }))
    await fs.rm(source, { recursive: true, force: true })
  }
}

module.exports = moveFile
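Editor's note: beyond the README's happy-path example, the `overwrite: false` guard above is worth seeing fail. A minimal sketch with hypothetical paths:

```js
// Illustrative only: moveFile refuses to clobber when overwrite is false.
const { moveFile } = require('@npmcli/fs')

const main = async () => {
  try {
    await moveFile('build/report.txt', 'archive/report.txt', { overwrite: false })
  } catch (err) {
    // thrown when archive/report.txt already exists
    console.error(err.message) // The destination file exists: archive/report.txt
  }
}

main()
```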
20
node_modules/@npmcli/fs/lib/readdir-scoped.js
generated
vendored
Normal file
@ -0,0 +1,20 @@
const { readdir } = require('fs/promises')
const { join } = require('path')

const readdirScoped = async (dir) => {
  const results = []

  for (const item of await readdir(dir)) {
    if (item.startsWith('@')) {
      for (const scopedItem of await readdir(join(dir, item))) {
        results.push(join(item, scopedItem))
      }
    } else {
      results.push(item)
    }
  }

  return results
}

module.exports = readdirScoped
39
node_modules/@npmcli/fs/lib/with-temp-dir.js
generated
vendored
Normal file
@ -0,0 +1,39 @@
const { join, sep } = require('path')

const getOptions = require('./common/get-options.js')
const { mkdir, mkdtemp, rm } = require('fs/promises')

// create a temp directory, ensure its permissions match its parent, then call
// the supplied function passing it the path to the directory. clean up after
// the function finishes, whether it throws or not
const withTempDir = async (root, fn, opts) => {
  const options = getOptions(opts, {
    copy: ['tmpPrefix'],
  })
  // create the directory
  await mkdir(root, { recursive: true })

  const target = await mkdtemp(join(`${root}${sep}`, options.tmpPrefix || ''))
  let err
  let result

  try {
    result = await fn(target)
  } catch (_err) {
    err = _err
  }

  try {
    await rm(target, { force: true, recursive: true })
  } catch {
    // ignore errors
  }

  if (err) {
    throw err
  }

  return result
}

module.exports = withTempDir
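Editor's note: the README's example covers the resolve path; the cleanup-then-rethrow path above deserves one too. A minimal sketch with an assumed `scratch-` prefix:

```js
// Illustrative only: withTempDir with a tmpPrefix, showing that the
// temp directory is removed even when fn throws.
const os = require('os')
const { withTempDir } = require('@npmcli/fs')

const main = async () => {
  try {
    await withTempDir(os.tmpdir(), async (dir) => {
      console.log('working in', dir) // e.g. /tmp/scratch-AbC123
      throw new Error('boom')
    }, { tmpPrefix: 'scratch-' })
  } catch (err) {
    // the error from fn is rethrown after cleanup
    console.error(err.message) // boom
  }
}

main()
```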
54
node_modules/@npmcli/fs/package.json
generated
vendored
Normal file
@ -0,0 +1,54 @@
{
  "name": "@npmcli/fs",
  "version": "5.0.0",
  "description": "filesystem utilities for the npm cli",
  "main": "lib/index.js",
  "files": [
    "bin/",
    "lib/"
  ],
  "scripts": {
    "snap": "tap",
    "test": "tap",
    "npmclilint": "npmcli-lint",
    "lint": "npm run eslint",
    "lintfix": "npm run eslint -- --fix",
    "posttest": "npm run lint",
    "postsnap": "npm run lintfix --",
    "postlint": "template-oss-check",
    "template-oss-apply": "template-oss-apply --force",
    "eslint": "eslint \"**/*.{js,cjs,ts,mjs,jsx,tsx}\""
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/npm/fs.git"
  },
  "keywords": [
    "npm",
    "oss"
  ],
  "author": "GitHub Inc.",
  "license": "ISC",
  "devDependencies": {
    "@npmcli/eslint-config": "^5.0.0",
    "@npmcli/template-oss": "4.27.1",
    "tap": "^16.0.1"
  },
  "dependencies": {
    "semver": "^7.3.5"
  },
  "engines": {
    "node": "^20.17.0 || >=22.9.0"
  },
  "templateOSS": {
    "//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
    "version": "4.27.1",
    "publish": true
  },
  "tap": {
    "nyc-arg": [
      "--exclude",
      "tap-snapshots/**"
    ]
  }
}
21
node_modules/@npmcli/redact/LICENSE
generated
vendored
Normal file
@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 npm

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
42
node_modules/@npmcli/redact/README.md
generated
vendored
Normal file
@ -0,0 +1,42 @@
# @npmcli/redact

Redact sensitive npm information from output.

## API

This will redact `npm_` prefixed tokens and UUIDs from values.

It will also replace passwords in stringified URLs.

### `redact(string)`

Redact values from a single value.

```js
const { redact } = require('@npmcli/redact')

redact('https://user:pass@registry.npmjs.org/')
// https://user:***@registry.npmjs.org/

redact(`https://registry.npmjs.org/path/npm_${'a'.repeat(36)}`)
// https://registry.npmjs.org/path/npm_***
```

### `redactLog(string | string[])`

Redact values from a string or array of strings.

This method will also split all strings on `\s` and `=` and iterate over them.

```js
const { redactLog } = require('@npmcli/redact')

redactLog([
  'Something --x=https://user:pass@registry.npmjs.org/ foo bar',
  '--url=http://foo:bar@registry.npmjs.org',
])
// [
//   'Something --x=https://user:***@registry.npmjs.org/ foo bar',
//   '--url=http://foo:***@registry.npmjs.org/',
// ]
```
71
node_modules/@npmcli/redact/lib/deep-map.js
generated
vendored
Normal file
@ -0,0 +1,71 @@
const { serializeError } = require('./error')

const deepMap = (input, handler = v => v, path = ['$'], seen = new Set([input])) => {
  // this is in an effort to maintain bole's error logging behavior
  if (path.join('.') === '$' && input instanceof Error) {
    return deepMap({ err: serializeError(input) }, handler, path, seen)
  }
  if (input instanceof Error) {
    return deepMap(serializeError(input), handler, path, seen)
  }
  // allows for non-node js environments, such as workers
  if (typeof Buffer !== 'undefined' && input instanceof Buffer) {
    return `[unable to log instanceof buffer]`
  }
  if (input instanceof Uint8Array) {
    return `[unable to log instanceof Uint8Array]`
  }

  if (Array.isArray(input)) {
    const result = []
    for (let i = 0; i < input.length; i++) {
      const element = input[i]
      const elementPath = [...path, i]
      if (element instanceof Object) {
        if (!seen.has(element)) { // avoid getting stuck in circular reference
          seen.add(element)
          result.push(deepMap(handler(element, elementPath), handler, elementPath, seen))
        }
      } else {
        result.push(handler(element, elementPath))
      }
    }
    return result
  }

  if (input === null) {
    return null
  } else if (typeof input === 'object' || typeof input === 'function') {
    const result = {}

    for (const propertyName of Object.getOwnPropertyNames(input)) {
      // skip logging internal properties
      if (propertyName.startsWith('_')) {
        continue
      }

      try {
        const property = input[propertyName]
        const propertyPath = [...path, propertyName]
        if (property instanceof Object) {
          if (!seen.has(property)) { // avoid getting stuck in circular reference
            seen.add(property)
            result[propertyName] = deepMap(
              handler(property, propertyPath), handler, propertyPath, seen
            )
          }
        } else {
          result[propertyName] = handler(property, propertyPath)
        }
      } catch (err) {
        // a getter may throw an error
        result[propertyName] = `[error getting value: ${err.message}]`
      }
    }
    return result
  }

  return handler(input, path)
}

module.exports = { deepMap }
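Editor's note: `deepMap` hands every value to the handler along with a JSONPath-like path array, which is what the path-based matchers in this package key on. A minimal sketch masking one hypothetical field by path, assuming deep requires into `lib/` work:

```js
// Illustrative only: visiting every leaf with its path and masking one field.
const { deepMap } = require('@npmcli/redact/lib/deep-map')

const input = { user: 'alice', auth: { token: 'npm_secret' } }
const masked = deepMap(input, (value, path) => {
  return path.join('.') === '$.auth.token' ? '***' : value
})
console.log(masked) // { user: 'alice', auth: { token: '***' } }
```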
28
node_modules/@npmcli/redact/lib/error.js
generated
vendored
Normal file
@ -0,0 +1,28 @@
/** takes an error object and serializes it to a plain object */
function serializeError (input) {
  if (!(input instanceof Error)) {
    if (typeof input === 'string') {
      const error = new Error(`attempted to serialize a non-error, string String, "${input}"`)
      return serializeError(error)
    }
    const error = new Error(`attempted to serialize a non-error, ${typeof input} ${input?.constructor?.name}`)
    return serializeError(error)
  }
  // different error objects store status code differently
  // AxiosError uses `status`, other services use `statusCode`
  const statusCode = input.statusCode ?? input.status
  // CAUTION: what we serialize here gets added to the size of logs
  return {
    errorType: input.errorType ?? input.constructor.name,
    ...(input.message ? { message: input.message } : {}),
    ...(input.stack ? { stack: input.stack } : {}),
    // think of this as error code
    ...(input.code ? { code: input.code } : {}),
    // think of this as http status code
    ...(statusCode ? { statusCode } : {}),
  }
}

module.exports = {
  serializeError,
}
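Editor's note: a minimal sketch of what `serializeError` keeps from a typical error, assuming deep requires into `lib/` work; the error fields are illustrative:

```js
// Illustrative only: serializing an error to a plain, log-friendly object.
const { serializeError } = require('@npmcli/redact/lib/error')

const err = new Error('request failed')
err.code = 'ECONNRESET'
err.statusCode = 502

const plain = serializeError(err)
console.log(plain.errorType) // Error
console.log(plain.code) // ECONNRESET
console.log(plain.statusCode) // 502
// plain.stack is carried along too, hence the CAUTION about log size
```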
44
node_modules/@npmcli/redact/lib/index.js
generated
vendored
Normal file
@ -0,0 +1,44 @@
const matchers = require('./matchers')
const { redactUrlPassword } = require('./utils')

const REPLACE = '***'

const redact = (value) => {
  if (typeof value !== 'string' || !value) {
    return value
  }
  return redactUrlPassword(value, REPLACE)
    .replace(matchers.NPM_SECRET.pattern, `npm_${REPLACE}`)
    .replace(matchers.UUID.pattern, REPLACE)
}

// split on \s|= similar to how nopt parses options
const splitAndRedact = (str) => {
  // stateful regex, don't move out of this scope
  const splitChars = /[\s=]/g

  let match = null
  let result = ''
  let index = 0
  while (match = splitChars.exec(str)) {
    result += redact(str.slice(index, match.index)) + match[0]
    index = splitChars.lastIndex
  }

  return result + redact(str.slice(index))
}

// replaces auth info in an array of arguments or in a string
const redactLog = (arg) => {
  if (typeof arg === 'string') {
    return splitAndRedact(arg)
  } else if (Array.isArray(arg)) {
    return arg.map((a) => typeof a === 'string' ? splitAndRedact(a) : a)
  }
  return arg
}

module.exports = {
  redact,
  redactLog,
}
88
node_modules/@npmcli/redact/lib/matchers.js
generated
vendored
Normal file
@ -0,0 +1,88 @@
const TYPE_REGEX = 'regex'
const TYPE_URL = 'url'
const TYPE_PATH = 'path'

const NPM_SECRET = {
  type: TYPE_REGEX,
  pattern: /\b(npms?_)[a-zA-Z0-9]{36,48}\b/gi,
  replacement: `[REDACTED_NPM_SECRET]`,
}

const AUTH_HEADER = {
  type: TYPE_REGEX,
  pattern: /\b(Basic\s+|Bearer\s+)[\w+=\-.]+\b/gi,
  replacement: `[REDACTED_AUTH_HEADER]`,
}

const JSON_WEB_TOKEN = {
  type: TYPE_REGEX,
  pattern: /\b[A-Za-z0-9-_]{10,}(?!\.\d+\.)\.[A-Za-z0-9-_]{3,}\.[A-Za-z0-9-_]{20,}\b/gi,
  replacement: `[REDACTED_JSON_WEB_TOKEN]`,
}

const UUID = {
  type: TYPE_REGEX,
  pattern: /\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b/gi,
  replacement: `[REDACTED_UUID]`,
}

const URL_MATCHER = {
  type: TYPE_REGEX,
  pattern: /(?:https?|ftp):\/\/[^\s/"$.?#].[^\s"]*/gi,
  replacement: '[REDACTED_URL]',
}

const DEEP_HEADER_AUTHORIZATION = {
  type: TYPE_PATH,
  predicate: ({ path }) => path.endsWith('.headers.authorization'),
  replacement: '[REDACTED_HEADER_AUTHORIZATION]',
}

const DEEP_HEADER_SET_COOKIE = {
  type: TYPE_PATH,
  predicate: ({ path }) => path.endsWith('.headers.set-cookie'),
  replacement: '[REDACTED_HEADER_SET_COOKIE]',
}

const DEEP_HEADER_COOKIE = {
  type: TYPE_PATH,
  predicate: ({ path }) => path.endsWith('.headers.cookie'),
  replacement: '[REDACTED_HEADER_COOKIE]',
}

const REWRITE_REQUEST = {
  type: TYPE_PATH,
  predicate: ({ path }) => path.endsWith('.request'),
  replacement: (input) => ({
    method: input?.method,
    path: input?.path,
    headers: input?.headers,
    url: input?.url,
  }),
}

const REWRITE_RESPONSE = {
  type: TYPE_PATH,
  predicate: ({ path }) => path.endsWith('.response'),
  replacement: (input) => ({
    data: input?.data,
    status: input?.status,
    headers: input?.headers,
  }),
}

module.exports = {
  TYPE_REGEX,
  TYPE_URL,
  TYPE_PATH,
  NPM_SECRET,
  AUTH_HEADER,
  JSON_WEB_TOKEN,
  UUID,
  URL_MATCHER,
  DEEP_HEADER_AUTHORIZATION,
  DEEP_HEADER_SET_COOKIE,
  DEEP_HEADER_COOKIE,
  REWRITE_REQUEST,
  REWRITE_RESPONSE,
}
59
node_modules/@npmcli/redact/lib/server.js
generated
vendored
Normal file
@ -0,0 +1,59 @@
const {
  AUTH_HEADER,
  JSON_WEB_TOKEN,
  NPM_SECRET,
  DEEP_HEADER_AUTHORIZATION,
  DEEP_HEADER_SET_COOKIE,
  REWRITE_REQUEST,
  REWRITE_RESPONSE,
  DEEP_HEADER_COOKIE,
} = require('./matchers')

const {
  redactUrlMatcher,
  redactUrlPasswordMatcher,
  redactMatchers,
} = require('./utils')

const { serializeError } = require('./error')

const { deepMap } = require('./deep-map')

const _redact = redactMatchers(
  NPM_SECRET,
  AUTH_HEADER,
  JSON_WEB_TOKEN,
  DEEP_HEADER_AUTHORIZATION,
  DEEP_HEADER_SET_COOKIE,
  DEEP_HEADER_COOKIE,
  REWRITE_REQUEST,
  REWRITE_RESPONSE,
  redactUrlMatcher(
    redactUrlPasswordMatcher()
  )
)

const redact = (input) => deepMap(input, (value, path) => _redact(value, { path }))

/** takes an error and returns a new error, keeping some custom properties */
function redactError (input) {
  const { message, ...data } = serializeError(input)
  const output = new Error(redact(message))
  return Object.assign(output, redact(data))
}

/** runs a function within try / catch and throws the error wrapped by redactError */
function redactThrow (func) {
  if (typeof func !== 'function') {
    throw new Error('redactThrow expects a function')
  }
  return async (...args) => {
    try {
      return await func(...args)
    } catch (error) {
      throw redactError(error)
    }
  }
}

module.exports = { redact, redactError, redactThrow }
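Editor's note: this server-side `redact` composes `deepMap` with the full matcher stack, so whole objects can be scrubbed and header values are masked by path. A minimal sketch with a hypothetical payload, assuming deep requires into `lib/` work:

```js
// Illustrative only: deep-redacting an object before logging it.
const { redact } = require('@npmcli/redact/lib/server')

const logEntry = {
  headers: { authorization: 'Bearer npm_abc123' },
  note: 'sent from ci',
}
console.log(redact(logEntry))
// { headers: { authorization: '[REDACTED_HEADER_AUTHORIZATION]' }, note: 'sent from ci' }
```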
202
node_modules/@npmcli/redact/lib/utils.js
generated
vendored
Normal file
@ -0,0 +1,202 @@
const {
  URL_MATCHER,
  TYPE_URL,
  TYPE_REGEX,
  TYPE_PATH,
} = require('./matchers')

/**
 * creates a string of asterisks,
 * this forces a minimum number of asterisks for security purposes
 */
const asterisk = (length = 0) => {
  length = typeof length === 'string' ? length.length : length
  if (length < 8) {
    return '*'.repeat(8)
  }
  return '*'.repeat(length)
}

/**
 * escapes all special regex chars
 * @see https://stackoverflow.com/a/9310752
 * @see https://github.com/tc39/proposal-regex-escaping
 */
const escapeRegExp = (text) => {
  return text.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, `\\$&`)
}

/**
 * provides a regex "or" of the url versions of a string
 */
const urlEncodeRegexGroup = (value) => {
  const decoded = decodeURIComponent(value)
  const encoded = encodeURIComponent(value)
  const union = [...new Set([encoded, decoded, value])].map(escapeRegExp).join('|')
  return union
}

/**
 * a tagged template literal that returns a regex, ensuring all variables
 * are escaped
 */
const urlEncodeRegexTag = (strings, ...values) => {
  let pattern = ''
  for (let i = 0; i < values.length; i++) {
    pattern += strings[i] + `(${urlEncodeRegexGroup(values[i])})`
  }
  pattern += strings[strings.length - 1]
  return new RegExp(pattern)
}

/**
 * creates a matcher for redacting url hostname
 */
const redactUrlHostnameMatcher = ({ hostname, replacement } = {}) => ({
  type: TYPE_URL,
  predicate: ({ url }) => url.hostname === hostname,
  pattern: ({ url }) => {
    return urlEncodeRegexTag`(^${url.protocol}//${url.username}:.+@)?${url.hostname}`
  },
  replacement: `$1${replacement || asterisk()}`,
})

/**
 * creates a matcher for redacting url search / query parameter values
 */
const redactUrlSearchParamsMatcher = ({ param, replacement } = {}) => ({
  type: TYPE_URL,
  predicate: ({ url }) => url.searchParams.has(param),
  pattern: ({ url }) => urlEncodeRegexTag`(${param}=)${url.searchParams.get(param)}`,
  replacement: `$1${replacement || asterisk()}`,
})

/** creates a matcher for redacting the url password */
const redactUrlPasswordMatcher = ({ replacement } = {}) => ({
  type: TYPE_URL,
  predicate: ({ url }) => url.password,
  pattern: ({ url }) => urlEncodeRegexTag`(^${url.protocol}//${url.username}:)${url.password}`,
  replacement: `$1${replacement || asterisk()}`,
})

const redactUrlReplacement = (...matchers) => (subValue) => {
  try {
    const url = new URL(subValue)
    return redactMatchers(...matchers)(subValue, { url })
  } catch (err) {
    return subValue
  }
}

/**
 * creates a matcher / submatcher for urls, this function allows you to first
 * collect all urls within a larger string and then pass those urls to a
 * submatcher
 *
 * @example
 * console.log("this will first match all urls, then pass those urls to the password matcher")
 * redactMatchers(redactUrlMatcher(redactUrlPasswordMatcher()))
 *
 * @example
 * console.log(
 *   "this will assume you are passing in a string that is a url, and will redact the password"
 * )
 * redactMatchers(redactUrlPasswordMatcher())
 *
 */
const redactUrlMatcher = (...matchers) => {
  return {
    ...URL_MATCHER,
    replacement: redactUrlReplacement(...matchers),
  }
}

const matcherFunctions = {
  [TYPE_REGEX]: (matcher) => (value) => {
    if (typeof value === 'string') {
      value = value.replace(matcher.pattern, matcher.replacement)
    }
    return value
  },
  [TYPE_URL]: (matcher) => (value, ctx) => {
    if (typeof value === 'string') {
      try {
        const url = ctx?.url || new URL(value)
        const { predicate, pattern } = matcher
        const predicateValue = predicate({ url })
        if (predicateValue) {
          value = value.replace(pattern({ url }), matcher.replacement)
        }
      } catch (_e) {
        return value
      }
    }
    return value
  },
  [TYPE_PATH]: (matcher) => (value, ctx) => {
    const rawPath = ctx?.path
    const path = rawPath.join('.').toLowerCase()
    const { predicate, replacement } = matcher
    const replace = typeof replacement === 'function' ? replacement : () => replacement
    const shouldRun = predicate({ rawPath, path })
    if (shouldRun) {
      value = replace(value, { rawPath, path })
    }
    return value
  },
}

/** converts a matcher to a function */
const redactMatcher = (matcher) => {
  return matcherFunctions[matcher.type](matcher)
}

/** converts a series of matchers to a function */
const redactMatchers = (...matchers) => (value, ctx) => {
  const flatMatchers = matchers.flat()
  return flatMatchers.reduce((result, matcher) => {
    const fn = (typeof matcher === 'function') ? matcher : redactMatcher(matcher)
    return fn(result, ctx)
  }, value)
}

/**
 * replacement handler, keeping $1 (if it exists) and replacing the
 * rest of the string with asterisks, maintaining string length
 */
const redactDynamicReplacement = () => (value, start) => {
  if (typeof start === 'number') {
    return asterisk(value)
  }
  return start + asterisk(value.substring(start.length).length)
}

/**
 * replacement handler, keeping $1 (if it exists) and replacing the
 * rest of the string with a fixed number of asterisks
 */
const redactFixedReplacement = (length) => (_value, start) => {
  if (typeof start === 'number') {
    return asterisk(length)
  }
  return start + asterisk(length)
}

const redactUrlPassword = (value, replacement) => {
  return redactMatchers(redactUrlPasswordMatcher({ replacement }))(value)
}

module.exports = {
  asterisk,
  escapeRegExp,
  urlEncodeRegexGroup,
  urlEncodeRegexTag,
  redactUrlHostnameMatcher,
  redactUrlSearchParamsMatcher,
  redactUrlPasswordMatcher,
  redactUrlMatcher,
  redactUrlReplacement,
  redactDynamicReplacement,
  redactFixedReplacement,
  redactMatchers,
  redactUrlPassword,
}
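Editor's note: the matcher combinators above compose into a single scrub function. A minimal sketch that first finds URLs in a larger string, then redacts the password and a hypothetical `token` query parameter inside each one, assuming deep requires into `lib/` work:

```js
// Illustrative only: composing utils matchers to scrub a URL inside a log line.
const {
  redactMatchers,
  redactUrlMatcher,
  redactUrlPasswordMatcher,
  redactUrlSearchParamsMatcher,
} = require('@npmcli/redact/lib/utils')

const scrub = redactMatchers(
  redactUrlMatcher(
    redactUrlPasswordMatcher(),
    redactUrlSearchParamsMatcher({ param: 'token' })
  )
)

console.log(scrub('fetching https://ci:hunter2@registry.example.com/?token=abc123 now'))
// fetching https://ci:********@registry.example.com/?token=******** now
```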
Some files were not shown because too many files have changed in this diff.