
Research Tools & Process

This page describes the automated research system and investigative methodology that produced the 36 studies in this series.


Investigative Stance

Each study is produced by an agent that functions as an investigator, not an advocate. This distinction governs every step of the process:

  • Gather evidence from all sides. Each vision cycle is examined three times -- once from the Historicist perspective, once from the Preterist perspective, and once from the Futurist perspective. A fourth comparative study adjudicates between them.
  • Do not assume a conclusion before examining the evidence. The conclusion emerges FROM the evidence, not the reverse.
  • State what the text says, not opinions about it. The agent does not use editorial characterizations like "genuine tension," "strongest argument," or "non-intuitive reading." It states what each passage says and what each interpretive position infers from it.
  • Never use conclusory language. Avoid words like "irrefutable," "obviously," or "clearly proves." Prefer formulations such as "the text states" and "this is consistent with."

The Three-Position Comparative Methodology

This series examines Daniel's prophecies systematically through three lenses:

  1. Positional Studies (HIST/PRET/FUT): Each position gets a dedicated study per vision cycle where it presents its strongest case using the evidence classification framework. The agent investigates that position's reading with full access to the biblical text and research tools.

  2. Comparative Studies (COMPARE): After all three positions have been presented, a comparative study adjudicates between them using specification-match analysis -- how well does each position's reading match what the text actually specifies?

  3. Steelman Compilations: Studies 27-29 compile each position's complete case across ALL vision cycles, stress-testing the cumulative evidence.

  4. Grand Synthesis: Study 30 brings everything together with 399 classified evidence items across 126 inferences.


How the Studies Were Produced

Each study was generated by a multi-agent pipeline, a Claude Code skill that answers Bible questions through tool-driven research. The pipeline ensures that:

  • Scope comes from tools, not training knowledge. The AI does not decide which verses are relevant based on what it was trained on. Instead, tools search topical dictionaries, concordances, and semantic indexes to discover what Scripture says about the topic.
  • Research and analysis are separated. The agent that gathers data is not the same agent that draws conclusions. This prevents confirmation bias.
  • Every claim is traceable. Raw tool output is preserved in each study's raw-data/ folder, so every finding can be verified against its source.

The Multi-Agent Pipeline

Phase 1: Scoping Agent
   | Discovers topics, verses, Strong's numbers, related studies
   | Writes PROMPT.md (the research brief)

Phase 2: Research Agent
   | Reads PROMPT.md
   | Retrieves all verse text, runs parallels, word studies, parsing
   | Writes 01-topics.md, 02-verses.md, 04-word-studies.md
   | Saves raw tool output to raw-data/

Phase 3: Analysis Agent
   | Reads clean research files
   | Applies the evidence classification methodology
   | Writes 03-analysis.md and CONCLUSION.md

Phase 4: Validation Agent(s)
   | Reads CONCLUSION.md against the biblical text
   | Produces position-specific validation reports
   | Identifies misclassifications, unsupported claims, missing evidence
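
The hand-off between phases can be sketched in code. This is an illustrative model only, not the actual skill's API: every function and file name below is hypothetical, but it shows the key design constraint that each agent communicates with the next solely through files on disk, so each phase starts with a fresh context.

```python
# Hypothetical sketch of the four-phase hand-off. Each phase reads only
# the files the previous phase wrote, mirroring the fresh-context design.
import tempfile
from pathlib import Path

def scoping_agent(study: Path) -> None:
    # Phase 1: tool-discovered scope goes into the research brief.
    (study / "PROMPT.md").write_text("verses: Dan 8:9-14\n")

def research_agent(study: Path) -> None:
    # Phase 2: reads only PROMPT.md; raw output is archived for traceability.
    brief = (study / "PROMPT.md").read_text()
    (study / "raw-data").mkdir(exist_ok=True)
    (study / "02-verses.md").write_text("## Verses\n" + brief)

def analysis_agent(study: Path) -> None:
    # Phase 3: reads the clean research files, never raw tool output.
    research = (study / "02-verses.md").read_text()
    (study / "CONCLUSION.md").write_text("## Conclusion\n" + research.splitlines()[-1] + "\n")

def validation_agent(study: Path) -> list[str]:
    # Phase 4: checks the conclusion, reporting issues rather than fixing them.
    issues = []
    if not (study / "CONCLUSION.md").read_text().strip():
        issues.append("empty conclusion")
    return issues

study = Path(tempfile.mkdtemp())
for phase in (scoping_agent, research_agent, analysis_agent):
    phase(study)
print(validation_agent(study))  # -> []
```

Because no phase shares in-memory state with another, the separation between research and analysis is enforced structurally rather than by instruction alone.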

Why multiple agents?

  • The scoping agent prevents training-knowledge bias. Scope comes from tool discovery, not from what the AI "knows" about theology.
  • The research agent gets a fresh context window dedicated to data gathering. This maximizes the amount of data it can collect without running out of context.
  • The analysis agent gets a fresh context window loaded with clean, organized research. This maximizes its capacity for synthesis and careful reasoning.
  • The validation agent(s) provide independent quality control, checking each position's evidence claims against the actual text.

The Study Files

Each study directory contains these files, produced by the pipeline:

  • PROMPT.md (Scoping Agent) -- The research brief: tool-discovered topics, verses, Strong's numbers, related studies, and focus areas
  • 01-topics.md (Research Agent) -- Nave's Topical Bible entries with all verse references for each topic
  • 02-verses.md (Research Agent) -- Full KJV text for every verse examined, organized thematically
  • 04-word-studies.md (Research Agent) -- Strong's concordance data: Hebrew/Greek words, definitions, translation statistics, verse occurrences
  • raw-data/ (Research Agent) -- Raw tool output archived by category (Strong's lookups, parsing, parallels, etc.)
  • 03-analysis.md (Analysis Agent) -- Verse-by-verse analysis with full evidence classification applied
  • CONCLUSION.md (Analysis Agent) -- Evidence tables (E/N/I), tally, and final assessment
  • *-validation.md (Validation Agent) -- Position-specific validation checking evidence claims against the text

Data Sources

The tools draw from these primary data sources:

  • KJV Bible -- Complete King James Version text (31,102 verses)
  • Nave's Topical Bible -- Orville J. Nave's topical dictionary (5,319 topics)
  • Strong's Concordance -- James Strong's exhaustive concordance with Hebrew/Greek lexicon (every word in the KJV mapped to its original language)
  • BHSA (Biblia Hebraica Stuttgartensia Amstelodamensis) -- Hebrew Bible linguistic database via Text-Fabric (full morphological parsing of every Hebrew word)
  • N1904 (Nestle 1904) -- Greek New Testament linguistic database via Text-Fabric (full morphological parsing of every Greek word)
  • Textus Receptus -- Byzantine Greek text tradition (for textual variant comparison)
  • LXX Mapping -- Septuagint translation correspondences (Hebrew-to-Greek word mappings)
  • Sentence embeddings -- Pre-computed semantic vectors (for semantic search across all sources)

Position Databases: How Each View Gets a Fair Hearing

A distinctive feature of this series is that each interpretive position has its own dedicated argument database -- a curated collection of that position's strongest arguments, verified against its own scholarly tradition. These databases ensure that no position is straw-manned or under-represented.

Historicist Position Database (942 arguments)

The historicist database represents the traditional Protestant reading of Daniel as continuous prophecy spanning from Babylon to the Second Coming. Its arguments were verified against Ellen White's writings, SDA pioneer scholarship (Uriah Smith, William Miller, Josiah Litch, S.N. Haskell), LeRoy Froom's Prophetic Faith of Our Fathers, and Stephen Bohr's study notes.

The database contains textual/grammatical arguments (e.g., the gadal/yether progression requiring the little horn to exceed both Persia and Greece), vocabulary chains binding Daniel's chapters into a unified prophetic corpus (e.g., the biyn chain across Dan 8-12), cross-references to Revelation (186 catalogued allusions), day-year principle evidence (13 supporting lines), and documented historical fulfillments.

Preterist Position Database (412 arguments)

The preterist database represents the reading of Daniel's prophecies as referring primarily to the Maccabean crisis of 167-164 BC. Sources include Jerome's Commentary on Daniel (preserving Porphyry's 3rd-century arguments -- the earliest systematic preterist case), Albert Barnes, John Calvin, Matthew Henry, and modern critical scholars (Collins, Goldingay, Kitchen).

The database contains dating/composition arguments (linguistic evidence, Dead Sea Scrolls), Antiochus IV identification evidence, Dan 11:1-35 verse-by-verse Ptolemaic-Seleucid correspondences (the preterist position's strongest section), literary genre arguments, and the position's own acknowledged weaknesses (Dan 11:40-45 problems, everlasting kingdom language, gadal/yether progression).

Futurist Position Database (466 arguments)

The futurist database represents the dispensationalist reading that inserts a gap between Daniel's 69th and 70th weeks and places the climactic fulfillment in a future tribulation. Sources include J.N. Darby's Synopsis (the origin of dispensationalism), John Walvoord, J. Dwight Pentecost, Thomas Ice, Harold Hoehner, and Robert Anderson.

The database contains gap/parenthesis arguments, revived-Rome theory, future Antichrist identification, type/antitype reasoning (Antiochus as historical type), literal time-period arguments, the Israel/Church distinction, Third Temple evidence, and counter-arguments to historicist and preterist readings.

How the Databases Are Used

The databases serve two critical functions in the study pipeline:

  1. Prompt Review (Phase 2.5): Before the research agent runs, a reviewer checks whether the research scope covers the arguments each position's database expects for that chapter. Missing arguments are added as research directives. This prevents the study from accidentally omitting a position's key claims.

  2. Position Validation (Phase 5): After the analysis is written, dedicated validators check the study against each position's database. The validator asks: Is each argument accurately represented? Is it present, missing, or misrepresented? Are the evidence classifications correct? This catches both straw-manning (weakening a position's case) and over-claiming (classifying evidence at a higher tier than warranted).

The databases are the authority on what each position holds. If the database says the historicist position argues X, and the study attributes Y to historicism, that is a misrepresentation -- even if the validator's own training knowledge thinks Y is defensible. This constraint keeps the investigation honest.
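
The database-as-authority rule can be sketched as a simple set comparison. This is an illustrative model, not the actual validator: the function name and the toy argument strings are hypothetical.

```python
# Sketch of the database-as-authority check: the position database is the
# sole reference point for what a position holds.
def check_attribution(database: set[str], study_claims: set[str]) -> dict[str, set[str]]:
    """Compare what a study attributes to a position against that
    position's argument database. Arguments in the database but absent
    from the study are flagged as missing; claims the study attributes
    to the position that the database does not hold are flagged as
    misrepresentations, regardless of whether they seem defensible."""
    return {
        "missing": database - study_claims,
        "misrepresented": study_claims - database,
    }

db = {"gadal/yether progression", "biyn chain (Dan 8-12)"}
claims = {"gadal/yether progression", "claim the database does not hold"}
report = check_attribution(db, claims)
```

Here `report["missing"]` contains the omitted database argument and `report["misrepresented"]` contains the foreign claim, which is exactly the distinction the Phase 5 validators are asked to draw.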


Evidence Classification Methodology

The core of the methodology is a three-tier evidence classification system that distinguishes between what Scripture directly states, what necessarily follows from it, and what positions claim it implies.

The Three Tiers

E -- Explicit. "The Bible says X." You can point to a verse that says X. A close paraphrase of the actual words of a specific verse, with no concept, framework, or interpretation added beyond what the words themselves require.

N -- Necessary Implication. "The Bible implies X." You can point to verses that, when combined, force X with no alternative. Every reader from any theological position must agree this follows -- no additional reasoning is required.

I -- Inference. "A position claims the Bible teaches X." No verse explicitly states X, and no combination of verses necessarily implies X. Something must be added beyond what the text contains.

Critical rule: Inferences cannot block explicit statements or necessary implications. If E and N items establish X, the existence of passages that could be inferred to teach not-X does not prevent X from being established.
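
The tiers and the critical rule can be expressed as a small sketch. Everything here is illustrative, assuming a minimal data model (the `EvidenceItem` class and `established` function are not part of the actual pipeline):

```python
# Minimal model of the three tiers and the blocking rule.
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    claim: str
    tier: str       # "E" (explicit), "N" (necessary implication), "I" (inference)
    supports: bool  # True if the item supports proposition X, False if it opposes

def established(items: list[EvidenceItem]) -> bool:
    """X is established when some E/N item supports it and no E/N item
    opposes it. Opposing I items are ignored here: inferences cannot
    block explicit statements or necessary implications."""
    en_for = any(i.tier in ("E", "N") and i.supports for i in items)
    en_against = any(i.tier in ("E", "N") and not i.supports for i in items)
    return en_for and not en_against

items = [
    EvidenceItem("a verse states X", "E", True),
    EvidenceItem("a position infers not-X", "I", False),  # cannot block X
]
```

With this list, `established(items)` returns True: the opposing inference does not override the explicit statement, whereas an opposing E or N item would.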


The 4-Type Inference Taxonomy

Inferences are further classified on two dimensions:

                        Derived from E/N            Not derived from E/N
  Aligns with E/N       I-A (Evidence-Extending)    I-C (Compatible External)
  Conflicts with E/N    I-B (Competing-Evidence)    I-D (Counter-Evidence External)

I-A (Evidence-Extending): Uses only vocabulary and concepts found in E/N statements. An inference only because it systematizes multiple E/N items into a broader claim. Strongest inference type.

I-B (Competing-Evidence): Some E/N statements support it, but other E/N statements appear to contradict it. Genuine textual tension where both sides can cite Scripture. Requires the SIS Resolution Protocol.

I-C (Compatible External): Reasoning from outside the text (theological tradition, philosophical framework, historical context) that does not contradict any E/N statement. Supplemental only.

I-D (Counter-Evidence External): External concepts that require overriding, redefining, or qualifying E/N statements to be maintained. Weakest inference type.

Evidence hierarchy: E > N > I-A > I-B (resolved by SIS) > I-C > I-D
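
The two dimensions reduce to a simple lookup, and the hierarchy to a ranking. A sketch (function names and the ranking list are illustrative, not part of the actual methodology's tooling):

```python
# Map the taxonomy's two dimensions to the four inference subtypes.
def inference_type(derived_from_en: bool, aligns_with_en: bool) -> str:
    if derived_from_en:
        return "I-A" if aligns_with_en else "I-B"
    return "I-C" if aligns_with_en else "I-D"

# The full evidence hierarchy as a ranking; a lower index means a
# stronger evidence class.
HIERARCHY = ["E", "N", "I-A", "I-B", "I-C", "I-D"]

def strength(tier: str) -> int:
    return HIERARCHY.index(tier)
```

For example, `inference_type(True, True)` yields "I-A" (evidence-extending) and `inference_type(False, False)` yields "I-D", and `strength` lets evidence items be sorted from strongest to weakest.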


Positional Classification

In this comparative series, evidence items are classified by which position they support (HIST, PRET, FUT, or Neutral/Shared). Items are classified positionally only when one position must deny the textual observation. Factual observations that all positions must accept are classified Neutral regardless of which side cites them.

Read the Full Methodology