Policy & Grants Decision Simulator (Synthetic)

Overview

This repository provides fully synthetic datasets and analysis outputs generated by the Policy & Grants Decision Simulator (v1.0) — a Python-based simulation engine that models end-to-end government funding, grants, and acquisition workflows.

The simulator captures:

  • vendor participation and self-selection,
  • review and compliance phases,
  • policy and regulatory “knobs,”
  • administrative burden and delays,
  • award and non-award outcomes.

The datasets are designed for comparative policy analysis, not prediction.

All data is entirely synthetic. No real vendors, proposals, agencies, or individuals are represented.

GitHub: https://github.com/DBbun/policy-grants-decision-simulator


How the Data Is Generated (Conceptual)

Each run simulates many funding episodes. For each episode:

  1. A policy configuration is sampled.
  2. A population of vendors is sampled.
  3. Vendors decide whether to bid.
  4. Submitted proposals flow through multiple review phases.
  5. Time, attrition, and outcomes are recorded.

Monte Carlo repetition produces distributions suitable for stress testing and sensitivity analysis.
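
A minimal sketch of one such episode, assuming hypothetical field names and deliberately simplified decision logic (the actual engine lives in the GitHub repository), could look like:

```python
import random

def simulate_episode(policy, rng):
    """Hypothetical sketch of a single funding episode; not the actual engine."""
    vendors = [
        {"id": i, "is_small_business": rng.random() < 0.4}
        for i in range(rng.randint(3, 12))           # population sampled per episode
    ]
    events = []
    for v in vendors:
        # Vendor self-selection: bid propensity falls as documentation burden rises.
        p_bid = max(0.05, 0.8 - 0.3 * policy["documentation_burden"])
        action = "bid" if rng.random() < p_bid else "no_bid"
        events.append({"vendor_id": v["id"], "phase": "BID_DECISION",
                       "action": action, "p_value": p_bid})
        # Submitted proposals would then flow through SUBMIT, COMPLIANCE_SCREEN,
        # TECH_REVIEW, COST_REALISM, RISK_REVIEW, and AWARD, recording durations.
    return events

rng = random.Random(42)                               # reproducibility seed
policy = {"documentation_burden": 0.5}                # one sampled policy configuration
trace = simulate_episode(policy, rng)
```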


Configuration Inputs (from config_snapshot.json)

Every dataset includes a machine-readable snapshot of the configuration used to generate it.

Top-level simulation parameters

  • runs – number of Monte Carlo runs
  • episodes_per_run – funding opportunities per run
  • vendors_per_episode – min/max number of vendors sampled per episode
  • random_seed – reproducibility seed

Policy knobs (examples)

  • documentation_burden – relative paperwork / compliance load
  • sb_preference – weighting favoring small businesses
  • oem_direct_adoption – degree of direct OEM engagement
  • export_cui_strictness – data/control restrictions
  • research_security_strictness – security review intensity

Knobs are sampled within configured ranges and may be perturbed for sensitivity analysis.
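
A sketch of reading the snapshot, assuming key names that mirror the parameters listed above (the exact JSON layout may differ between dataset versions):

```python
import json

# Read the configuration snapshot shipped alongside a dataset.
with open("output/config_snapshot.json") as f:
    config = json.load(f)

# Key names below mirror the documented parameters; the nesting is an assumption.
print(config["runs"], config["episodes_per_run"], config["random_seed"])
print(config["policy_knobs"]["documentation_burden"])  # e.g., a sampling range or point value
```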


Dataset Contents

1. Core Simulation Outputs (output/)

policy_runs.csv

One row per Monte Carlo run.

  • run_id – Unique run identifier
  • policy_tag – Baseline or perturbed policy label
  • avg_award_days – Mean time-to-award (days)
  • sb_win_rate – Fraction of awards to small businesses
  • avg_admin_load – Synthetic administrative burden proxy
  • awards – Number of awards
  • no_awards – Episodes ending without award
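
A sketch of a typical comparison across policy labels, using pandas and the column names above:

```python
import pandas as pd

runs = pd.read_csv("output/policy_runs.csv")

# Compare key outcome metrics across baseline and perturbed policies.
summary = (
    runs.groupby("policy_tag")[["avg_award_days", "sb_win_rate", "avg_admin_load"]]
        .mean()
        .sort_values("avg_award_days")
)
print(summary)
```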

decision_traces.csv

Event-level log (multiple rows per episode).

  • run_id – Run identifier
  • episode_id – Funding episode ID
  • vendor_id – Vendor identifier (synthetic)
  • actor – vendor or system
  • phase – Process phase (see below)
  • action – Action taken (e.g., bid, no_bid, advance)
  • reason_code – Attrition / disqualification reason
  • duration_days – Time spent in phase
  • is_small_business – Boolean
  • p_value – Probability used for stochastic decision

Phases include:
BID_DECISION, SUBMIT, COMPLIANCE_SCREEN, TECH_REVIEW,
COST_REALISM, RISK_REVIEW, AWARD, POST
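
A sketch of per-phase attrition and timing from the event log (action values beyond bid, no_bid, and advance are dataset-specific and treated here as assumptions):

```python
import pandas as pd

traces = pd.read_csv("output/decision_traces.csv")

# Event counts by phase and action show where vendor journeys stop.
print(traces.groupby(["phase", "action"]).size().unstack(fill_value=0))

# Attrition reasons, where a reason_code is recorded.
print(traces.dropna(subset=["reason_code"])["reason_code"].value_counts())

# Mean time spent per phase (compare with phase_bottlenecks.csv).
print(traces.groupby("phase")["duration_days"].mean())
```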


phase_bottlenecks.csv

Aggregated phase-level timing statistics.

  • phase – Process phase
  • share_of_total_days – Fraction of total process time
  • mean_days – Mean duration
  • p90_days – 90th percentile duration
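
These aggregates can be recomputed from decision_traces.csv as a consistency check; a sketch:

```python
import pandas as pd

traces = pd.read_csv("output/decision_traces.csv")

# Recompute per-phase timing shares from the event-level log.
per_phase = traces.groupby("phase")["duration_days"].agg(["sum", "mean"])
per_phase["share_of_total_days"] = per_phase["sum"] / per_phase["sum"].sum()
print(per_phase.sort_values("share_of_total_days", ascending=False))
```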

run_summary_by_knob.csv

Sensitivity analysis results.

  • knob – Policy knob name
  • direction – plus / minus
  • delta_avg_award_days – Change vs baseline
  • delta_sb_win_rate – Change vs baseline
  • ci95_low_* – 95% CI lower bound
  • ci95_high_* – 95% CI upper bound

For time-based metrics such as delta_avg_award_days, negative deltas indicate improvement (faster awards).
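
A sketch of flagging perturbations whose award-time effect is distinguishable from zero, assuming the * suffix expands to the metric name (e.g., ci95_low_avg_award_days):

```python
import pandas as pd

knobs = pd.read_csv("output/run_summary_by_knob.csv")

# Column names below assume the * wildcard expands to the metric name.
lo, hi = "ci95_low_avg_award_days", "ci95_high_avg_award_days"

# Keep rows whose 95% CI for the award-time delta excludes zero.
significant = knobs[(knobs[lo] > 0) | (knobs[hi] < 0)]
print(significant[["knob", "direction", "delta_avg_award_days", lo, hi]])
```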


assumptions.csv

Static modeling assumptions.

  • assumption – Assumption name
  • value – Configured value
  • notes – Interpretation

config_snapshot.json

Full configuration used to generate the dataset, enabling exact reproduction.


Process Phases (Detailed Semantics)

Phases represent abstracted functional steps commonly found in government grants and acquisition workflows.
They are conceptual, not tied to any specific agency or regulation.

BID_DECISION

Who: Vendor
Purpose: Vendor self-selection

Models whether a vendor chooses to engage at all.

Drivers:

  • expected payoff,
  • documentation burden,
  • capacity constraints,
  • policy preferences,
  • stochastic noise.

Outcomes:

  • bid
  • no_bid → vendor dropout

This phase is the primary driver of early attrition.
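
A minimal sketch of such a stochastic self-selection step, combining the drivers above (the coefficients are illustrative assumptions, not the simulator's actual weights):

```python
import random

def bid_decision(vendor, policy, rng):
    """Illustrative vendor self-selection; coefficients are assumptions."""
    p_bid = 0.7                                        # baseline propensity
    p_bid -= 0.25 * policy["documentation_burden"]     # paperwork discourages bidding
    p_bid -= 0.15 * vendor["capacity_strain"]          # capacity constraints
    if vendor["is_small_business"]:
        p_bid += 0.10 * policy["sb_preference"]        # preference raises expected payoff
    p_bid = min(max(p_bid, 0.01), 0.99)                # keep some stochastic noise
    action = "bid" if rng.random() < p_bid else "no_bid"
    return action, p_bid                               # p_bid would be logged as p_value

rng = random.Random(0)
vendor = {"is_small_business": True, "capacity_strain": 0.4}
policy = {"documentation_burden": 0.6, "sb_preference": 0.5}
print(bid_decision(vendor, policy, rng))
```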


SUBMIT

Who: Vendor
Purpose: Proposal preparation and submission

Duration increases with:

  • documentation burden,
  • compliance complexity,
  • limited vendor capacity.

Outcomes:

  • successful submission,
  • dropout during preparation.

COMPLIANCE_SCREEN

Who: System
Purpose: Administrative and eligibility screening

Models early checks such as completeness and eligibility.
Introduces short but variable delays and early disqualification risk.


TECH_REVIEW

Who: System
Purpose: Technical merit evaluation

Represents expert or panel review of technical quality.

Often contributes the largest share of total processing time.


COST_REALISM

Who: System
Purpose: Cost and budget evaluation

Models assessment of budget realism and cost sharing.
Higher scrutiny can increase delays and disproportionately affect smaller vendors.


RISK_REVIEW

Who: System
Purpose: Programmatic and compliance risk assessment

Abstracts security, data-control, and broader programmatic risks.
Strongly influenced by policy strictness.


AWARD

Who: System
Purpose: Selection and award processing

Final selection and award preparation.
Critical for final outcome and small-business win rate.


POST

Who: System
Purpose: Post-award administrative processing

Optional phase representing notifications, setup, and closeout.


Phase Timing Interpretation

  • Phase durations are relative, not absolute.
  • Absolute day counts should not be interpreted as real timelines.
  • Comparative statements (e.g., “TECH_REVIEW dominates total time”) are meaningful.

Phase Interaction with Policy Knobs

Each policy knob primarily affects the following phases:

  • documentation_burden – BID_DECISION, SUBMIT, COMPLIANCE_SCREEN
  • sb_preference – BID_DECISION, AWARD
  • oem_direct_adoption – BID_DECISION, COST_REALISM
  • export_cui_strictness – RISK_REVIEW
  • research_security_strictness – TECH_REVIEW, RISK_REVIEW
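
For analysis scripts, the same mapping can be kept as a small lookup, taken directly from the list above:

```python
# Policy knobs and the phases they primarily affect.
KNOB_PHASES = {
    "documentation_burden": ["BID_DECISION", "SUBMIT", "COMPLIANCE_SCREEN"],
    "sb_preference": ["BID_DECISION", "AWARD"],
    "oem_direct_adoption": ["BID_DECISION", "COST_REALISM"],
    "export_cui_strictness": ["RISK_REVIEW"],
    "research_security_strictness": ["TECH_REVIEW", "RISK_REVIEW"],
}
```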

2. Analysis Outputs (figures/)

Generated by analyze_results_v1.0.py.

Includes:

  • distribution histograms,
  • tradeoff plots,
  • bottleneck and Pareto diagnostics,
  • attrition analysis with explicit denominators,
  • sensitivity plots for policy knobs.

A human-readable REPORT.md summarizes the main findings.


Interpretation Notes

  • Rates use explicit denominators (per vendor decision, per episode); see the sketch after this list.
  • Dropout buckets are inferred, not observed.
  • Metrics are comparative, not predictive.
  • Results should not be interpreted as real-world performance estimates.
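
To make the first note concrete, a sketch computing two rates with their denominators stated explicitly (assuming each episode ends as exactly one of awards or no_awards):

```python
import pandas as pd

traces = pd.read_csv("output/decision_traces.csv")
runs = pd.read_csv("output/policy_runs.csv")

# Per vendor decision: bid rate, with BID_DECISION events as the denominator.
decisions = traces[traces["phase"] == "BID_DECISION"]
bid_rate = (decisions["action"] == "bid").mean()

# Per episode: no-award rate, with awards + no_awards as the denominator.
no_award_rate = runs["no_awards"].sum() / (runs["awards"] + runs["no_awards"]).sum()

print(f"bid rate (per vendor decision): {bid_rate:.3f}")
print(f"no-award rate (per episode):    {no_award_rate:.3f}")
```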

Intended Uses

Allowed without additional permission

  • academic research,
  • policy exploration,
  • internal evaluation and benchmarking,
  • education and training,
  • visualization and methods development.

Requires a separate license

  • operational or production use,
  • integration into live decision-support systems,
  • use in real policy or procurement decisions,
  • commercial products or consulting,
  • government agency deployment beyond exploratory analysis.

License

Research & Evaluation Use (Proprietary)

This dataset and all derived figures are fully downloadable for research and evaluation.

Commercial use, operational deployment, or use by government agencies for real decision-making requires a separate paid license.

Licensing inquiries:
[email protected]


Disclaimer

This dataset is entirely synthetic and does not represent any real agency, program, vendor, or procurement process.
Any resemblance to real systems is conceptual and coincidental.
