Research Tools & Process¶
This page describes the automated research system and investigative methodology that produced the 16 studies in this series.
Investigative Stance¶
Each study is produced by an agent that functions as an investigator, not an advocate. This distinction governs every step of the process:
- Gather evidence comprehensively. Trace each commandment from Genesis to Revelation.
- Do not assume a conclusion before examining the evidence. The conclusion emerges FROM the evidence, not the reverse.
- State what the text says, not opinions about it. The agent does not use editorial characterizations like "genuine tension," "strongest argument," or "non-intuitive reading." It states what each passage says.
- Never use language like "irrefutable," "obviously," or "clearly proves." Prefer neutral formulations such as "the text states" and "this is consistent with."
How the Studies Were Produced¶
Each study was generated by a three-agent pipeline called bible-study2, a Claude Code skill that answers Bible questions through tool-driven research. The pipeline ensures that:
- Scope comes from tools, not training knowledge. The AI does not decide which verses are relevant based on what it was trained on. Instead, tools search topical dictionaries, concordances, and semantic indexes to discover what Scripture says about the topic.
- Research and analysis are separated. The agent that gathers data is not the same agent that draws conclusions. This prevents confirmation bias.
- Every claim is traceable. Raw tool output is preserved in each study's raw-data/ folder, so every finding can be verified against its source.
The Three-Agent Pipeline¶
Phase 1: Scoping Agent

- Discovers topics, verses, Strong's numbers, and related studies
- Writes PROMPT.md (the research brief)

Phase 2: Research Agent

- Reads PROMPT.md
- Retrieves all verse text; runs parallels, word studies, and parsing
- Writes 01-topics.md, 02-verses.md, and 04-word-studies.md
- Saves raw tool output to raw-data/

Phase 3: Analysis Agent

- Reads the clean research files
- Applies the evidence classification methodology
- Writes 03-analysis.md and CONCLUSION.md
Why three agents instead of one?
- The scoping agent prevents training-knowledge bias. Scope comes from tool discovery, not from what the AI "knows" about theology.
- The research agent gets a fresh context window dedicated to data gathering. This maximizes the amount of data it can collect without running out of context.
- The analysis agent gets a fresh context window loaded with clean, organized research. This maximizes its capacity for synthesis and careful reasoning.
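The handoff between phases can be sketched in miniature. The function names and return values below are illustrative stand-ins for the three agents, not the actual bible-study2 skill code:

```python
# Illustrative sketch of the three-phase handoff. Each function stands
# in for an agent running in its own fresh context window.
def scope(question: str) -> str:
    # Phase 1: tool discovery produces the research brief
    return f"PROMPT.md research brief for: {question}"

def research(brief: str) -> dict:
    # Phase 2: data gathering only, no conclusions drawn
    return {name: "..." for name in
            ("01-topics.md", "02-verses.md", "04-word-studies.md", "raw-data/")}

def analyze(files: dict) -> dict:
    # Phase 3: synthesis over the clean research files
    return {"03-analysis.md": "...", "CONCLUSION.md": "..."}

outputs = analyze(research(scope("What does the fourth commandment require?")))
print(sorted(outputs))  # ['03-analysis.md', 'CONCLUSION.md']
```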
The Study Files¶
Each study directory contains these files, produced by the pipeline:
| File | Produced By | Contents |
|---|---|---|
| PROMPT.md | Scoping Agent | The research brief: tool-discovered topics, verses, Strong's numbers, related studies, and focus areas |
| 01-topics.md | Research Agent | Nave's Topical Bible entries with all verse references for each topic |
| 02-verses.md | Research Agent | Full KJV text for every verse examined, organized thematically |
| 04-word-studies.md | Research Agent | Strong's concordance data: Hebrew/Greek words, definitions, translation statistics, verse occurrences |
| raw-data/ | Research Agent | Raw tool output archived by category (Strong's lookups, parsing, parallels, concept context, etc.) |
| 03-analysis.md | Analysis Agent | Verse-by-verse analysis with full evidence classification applied |
| CONCLUSION.md | Analysis Agent | Evidence tables (E/N/I), tally, tally summary, and "What CAN/CANNOT Be Said" |
Data Sources¶
The tools draw from these primary data sources:
| Source | Description | Size |
|---|---|---|
| KJV Bible | Complete King James Version text | 31,102 verses |
| Nave's Topical Bible | Orville J. Nave's topical dictionary | 5,319 topics |
| Strong's Concordance | James Strong's exhaustive concordance with Hebrew/Greek lexicon | Every word in the KJV mapped to original language |
| BHSA (Biblia Hebraica Stuttgartensia Amstelodamensis) | Hebrew Bible linguistic database via Text-Fabric | Full morphological parsing of every Hebrew word |
| N1904 (Nestle 1904) | Greek New Testament linguistic database via Text-Fabric | Full morphological parsing of every Greek word |
| Textus Receptus | Byzantine Greek text tradition | For textual variant comparison |
| LXX Mapping | Septuagint translation correspondences | Hebrew-to-Greek word mappings |
| Sentence embeddings | Pre-computed semantic vectors | For semantic search across all sources |
Evidence Classification Methodology¶
The core of the methodology is a three-tier evidence classification system that distinguishes between what Scripture directly states, what necessarily follows from it, and what is inferred beyond the text.
The Three Tiers¶
E -- Explicit. "The Bible says X." You can point to a verse that says X. A close paraphrase of the actual words of a specific verse, with no concept, framework, or interpretation added beyond what the words themselves require.
N -- Necessary Implication. "The Bible implies X." You can point to verses that, when combined, force X with no alternative. Every reader from any theological position must agree this follows -- no additional reasoning is required.
I -- Inference. "Someone claims the Bible teaches X." No verse explicitly states X, and no combination of verses necessarily implies X. Something must be added beyond what the text contains.
Critical rule: Inferences cannot block explicit statements or necessary implications. If E and N items establish X, the existence of passages that could be inferred to teach not-X does not prevent X from being established. Those passages must be evaluated on their own terms.
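The tiers and the blocking rule can be expressed as a small sketch. The Tier enum and the is_established helper are hypothetical names invented here, not part of the actual tooling:

```python
from enum import Enum

class Tier(Enum):
    E = "Explicit"                # a verse directly states it
    N = "Necessary Implication"   # verses, combined, force it
    I = "Inference"               # something added beyond the text

def is_established(items):
    """A claim stands if any E or N item supports it; opposing
    inferences (Tier.I) cannot block it."""
    return any(tier in (Tier.E, Tier.N) and supports
               for tier, supports in items)

# An opposing inference does not block an explicit statement:
print(is_established([(Tier.E, True), (Tier.I, False)]))  # True
```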
The 4-Type Inference Taxonomy¶
Inferences are further classified on two dimensions:
| | Derived from E/N | Not derived from E/N |
|---|---|---|
| Aligns with E/N | I-A (Evidence-Extending) | I-C (Compatible External) |
| Conflicts with E/N | I-B (Competing-Evidence) | I-D (Counter-Evidence External) |
I-A (Evidence-Extending): Uses only vocabulary and concepts found in E/N statements. An inference only because it systematizes multiple E/N items into a broader claim. Strongest inference type.
I-B (Competing-Evidence): Some E/N statements support it, but other E/N statements appear to contradict it. Genuine textual tension where both sides can cite Scripture. Requires the SIS Resolution Protocol.
I-C (Compatible External): Reasoning from outside the text (theological tradition, philosophical framework, historical context) that does not contradict any E/N statement. Supplemental only.
I-D (Counter-Evidence External): External concepts that require overriding, redefining, or qualifying E/N statements to be maintained. Weakest inference type -- requires the text to mean something other than what it says.
Evidence hierarchy: E > N > I-A > I-B (resolved by SIS) > I-C > I-D
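The two-dimension taxonomy reduces to a simple decision table; this hypothetical helper shows the mapping:

```python
def classify_inference(derived_from_en: bool, aligns_with_en: bool) -> str:
    """Map the taxonomy's two dimensions to an inference type.
    (Illustrative sketch; names are not from the actual tooling.)"""
    if derived_from_en:
        return "I-A" if aligns_with_en else "I-B"
    return "I-C" if aligns_with_en else "I-D"

print(classify_inference(True, False))  # I-B
```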
Category Classification¶
Since this series is expository (not debate), evidence items are classified by category instead of position:
- Commandment Scope -- What the commandment prohibits or requires (its literal and expanded meaning)
- Word Study -- Hebrew/Greek vocabulary analysis, definitions, semantic ranges
- Biblical Application -- How the commandment is applied in OT narratives, laws, prophets
- NT Treatment -- How Jesus, the apostles, and NT authors treat the commandment
- Theological Significance -- What the commandment reveals about God's character, human nature, or the divine-human relationship
- Cross-Commandment -- How this commandment connects to other commandments or broader biblical themes
Scripture-Interprets-Scripture (SIS) Protocol¶
Cross-referencing splits into two types:
#4a -- SIS with verified textual connection. When a clear passage interprets an unclear one, and the connection is verified (shared vocabulary, OT quotation, tool-verified parallel score, or the text itself establishes the connection), this is standard hermeneutics -- not an inference trigger. The connection must be documented.
#4b -- Cross-referencing without verified textual connection. When the reader must supply the link between passages, this is an inference trigger. The connection depends on the interpreter's judgment, not on the text itself.
Clarity criteria (what makes a passage "clearer"):
- Directness of vocabulary -- actual words vs. figurative language
- Genre -- didactic > apocalyptic > parabolic
- Scope -- universal statement > specific situation
- Frequency -- repeated across authors/testaments > single occurrence
- Self-interpretation -- when the text explains its own meaning
I-B Resolution Protocol¶
When an inference has competing textual support (I-B), a five-step SIS Resolution must be documented in the CONCLUSION:
1. Identify tension. List the E/N items FOR and AGAINST the claim.
2. Assess clarity. Rate each E/N item: Plain / Contextually Clear / Ambiguous.
3. Count and weigh. Plain statements outweigh Ambiguous ones (not a mere vote count).
4. Apply SIS. Plain statements determine the reading of Ambiguous ones.
5. State resolution: Strong (plain statements on one side, only ambiguous on the other) / Moderate (a mix, with one side clearly dominant) / Unresolved (substantial plain or contextually clear statements on both sides).
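The final step can be sketched as a simplified rule over the clarity ratings assigned earlier in the protocol. This is a rough approximation with invented function names, not the documented protocol verbatim:

```python
def resolve(for_ratings, against_ratings):
    """Derive the resolution label from clarity ratings.
    Simplified, hypothetical rule set for illustration only."""
    def has_plain(side):
        return any(r == "Plain" for r in side)
    def only_ambiguous(side):
        return all(r == "Ambiguous" for r in side)
    if has_plain(for_ratings) and only_ambiguous(against_ratings):
        return "Strong"       # plain on one side, only ambiguous on the other
    if has_plain(for_ratings) and has_plain(against_ratings):
        return "Unresolved"   # substantial plain statements on both sides
    return "Moderate"

print(resolve(["Plain", "Ambiguous"], ["Ambiguous"]))  # Strong
```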
Verification Phase¶
After completing all evidence tables, the analysis agent runs a mandatory verification pass:
- Each E item is checked: does it directly quote or closely paraphrase actual verse text? Is it what the text says or what a position infers from it?
- Each N item is checked against all three N-tier tests. Items that fail are moved to Inferences.
- Each I item is checked with the source test (derived vs. external) and direction test (aligns vs. conflicts).
- Every I-A is checked: does it require only criterion #5 (systematizing)?
- Every I-B is checked: does it have E/N items on both sides?
- Every I-D is checked: does it override at least one E/N statement?
- Every #4a SIS connection is documented with shared vocabulary, OT quotation, or tool-verified parallel.
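Two of the checks above can be sketched as predicates over a hypothetical item shape. The dictionary keys are invented for illustration:

```python
def check_ib(item: dict) -> bool:
    """An I-B item must cite E/N evidence on BOTH sides."""
    return bool(item["en_for"]) and bool(item["en_against"])

def check_id(item: dict) -> bool:
    """An I-D item must override at least one E/N statement."""
    return len(item["overridden_en"]) >= 1

candidate = {"en_for": ["E3"], "en_against": []}
print(check_ib(candidate))  # False: one-sided support, so not genuinely I-B
```

An item that fails its check is reclassified rather than discarded, per the verification rules above.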
Tally Summary Format¶
Each CONCLUSION.md ends with:
- Explicit statements: [count]
- Necessary implications: [count]
- Inferences: [count]
    - I-A (Evidence-Extending): [count]
    - I-B (Competing-Evidence): [count] ([N] resolved, [M] unresolved)
    - I-C (Compatible External): [count]
    - I-D (Counter-Evidence External): [count]
Followed by What CAN Be Said (drawn from E and N tables) and What CANNOT Be Said (things the text does not directly state or necessarily imply).
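The tally block can be rendered mechanically from the counts. This sketch assumes a simple function signature that is not part of the actual tooling:

```python
def tally_summary(e: int, n: int, ia: int, ib: int,
                  ib_resolved: int, ic: int, id_: int) -> str:
    """Render the CONCLUSION.md tally block (illustrative formatting)."""
    total_i = ia + ib + ic + id_
    return "\n".join([
        f"- Explicit statements: {e}",
        f"- Necessary implications: {n}",
        f"- Inferences: {total_i}",
        f"    - I-A (Evidence-Extending): {ia}",
        f"    - I-B (Competing-Evidence): {ib} "
        f"({ib_resolved} resolved, {ib - ib_resolved} unresolved)",
        f"    - I-C (Compatible External): {ic}",
        f"    - I-D (Counter-Evidence External): {id_}",
    ])

print(tally_summary(5, 3, 2, 1, 1, 0, 0))
```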
Tool Descriptions¶
Topic & Verse Lookup¶
naves_semantic.py -- Semantic Topic Search¶
Finds relevant Nave's Topical Bible topics using semantic similarity (sentence embeddings), not just keyword matching. A query like "what does the Bible say about coveting" finds topics like COVETOUSNESS, CONTENTMENT, RICHES, and GREED even though those exact words may not appear in the query.
- Input: Natural language query
- Output: Ranked list of topics with relevance scores and key verse references
- Used by: Scoping agent to discover which topics are relevant to the research question
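The ranking step rests on cosine similarity between embedding vectors. Here is a toy sketch with two-dimensional stand-in vectors (real sentence embeddings have hundreds of dimensions, and the actual tool's internals are not shown on this page):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_topics(query_vec, topic_vecs: dict):
    """Rank topics by embedding similarity to the query (toy vectors)."""
    scores = {t: cosine(query_vec, v) for t, v in topic_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

topics = {"COVETOUSNESS": [0.9, 0.1], "SABBATH": [0.1, 0.9]}
print(rank_topics([0.8, 0.2], topics)[0][0])  # COVETOUSNESS
```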
naves_db.py -- Nave's Topical Dictionary Query¶
Direct lookup of Nave's Topical Bible entries. Returns the full entry for a topic, including all subtopics and every verse reference Nave catalogued.
- Modes:
    - --topic "SABBATH" -- Full entry for a specific topic
    - --search "commandment" -- Full-text search across all 5,319 topics
    - --list -- List all available topics
- Used by: Research agent to retrieve complete topic entries with all verse references
kjv.txt -- KJV Bible Text¶
The complete King James Version in a simple searchable format (BookName Chapter:Verse[TAB]Text). Verses are retrieved using pattern matching (Grep tool).
- Format: Genesis 1:1[TAB]In the beginning God created the heaven and the earth.
- Used by: Research agent to retrieve the actual text of every verse discovered during scoping
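Given the tab-delimited layout, a single verse can be retrieved with a linear scan. This sketch writes a one-verse sample file rather than assuming the real kjv.txt path:

```python
import os
import tempfile

def lookup(path: str, ref: str):
    """Return the text of one verse from a tab-delimited KJV file.
    The 'BookName Chapter:Verse<TAB>Text' layout is taken from the
    format described above; this is a sketch, not the pipeline's code."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            verse_ref, _, text = line.partition("\t")
            if verse_ref == ref:
                return text.strip()
    return None

# Demonstrate against a one-verse sample file:
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("Genesis 1:1\tIn the beginning God created the heaven and the earth.\n")
    sample = f.name
print(lookup(sample, "Genesis 1:1"))
os.unlink(sample)
```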
Cross-Testament Parallels¶
cross_testament_parallels_v2.py -- Hybrid Parallel Finder¶
Finds parallel passages across testaments using a hybrid scoring system that combines:
- Semantic similarity -- sentence embedding comparison
- Keyword overlap -- shared significant terms
- Theological phrase matching -- recognized biblical phrases and allusions
For every verse studied, the tool is run in BOTH directions (--hybrid-ot and --hybrid-nt) to find parallels in both testaments, regardless of which testament the source verse is in.
- Input: A verse reference (e.g., "EXO 20:8")
- Output: Ranked list of parallel passages with composite scores
- Used by: Research agent to discover cross-references and OT/NT connections
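A weighted sum is one plausible way to combine the three signals into a composite score. The weights below are illustrative only, since the tool's actual weighting is not documented on this page:

```python
def hybrid_score(semantic: float, keyword: float, phrase: float,
                 weights=(0.5, 0.3, 0.2)) -> float:
    """Combine the three parallel-finding signals into one score.
    Weights are hypothetical, chosen for illustration."""
    ws, wk, wp = weights
    return ws * semantic + wk * keyword + wp * phrase

# Strong semantic match, moderate keyword overlap, no phrase match:
print(hybrid_score(1.0, 0.5, 0.0))  # 0.65
```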
Strong's Concordance & Word Studies¶
search_strongs.py -- Strong's Translation Database¶
Searches Strong's exhaustive concordance for Hebrew and Greek word data.
- Modes:
    - --lookup H7676 -- All translations and verse counts for a Strong's number
    - --lexicon H7676 -- Word form, part of speech, full definition
    - --verses H7676 "sabbath" -- Every verse where H7676 is translated as "sabbath"
    - --lxx-map H7676 -- How the Septuagint translates this Hebrew word into Greek
    - --hebrew-source G4521 -- Which Hebrew words underlie this Greek word
- Used by: Research agent for word studies: understanding how original-language words are used, their semantic range, and translation patterns
semantic_strongs.py -- Semantic Strong's Search¶
Finds Strong's concordance entries related to a concept using semantic similarity. Useful when you know the concept but not the specific Hebrew/Greek word.
- Input: Natural language description (e.g., "honor parents", "sabbath rest")
- Output: Ranked Strong's entries with relevance scores
- Options: --hebrew for Hebrew words only; --verses to include verse references
- Used by: Scoping agent to discover which Strong's numbers are relevant to the research question
Hebrew Grammar Analysis¶
hebrew_parser.py -- Hebrew Morphological Parser¶
Parses the Hebrew Bible (BHSA via Text-Fabric) with full morphological analysis.
- Modes:
    - --verse "Exo 20:8" -- Full parsing: Hebrew text, lemmas, part of speech, stem (Qal/Niphal/Piel/etc.), tense (perfect/imperfect/participle/etc.), person, gender, number
    - --clause "Exo 20:8" -- Clause structure: clause types (XQtl, xYqX, Wayq, NmCl), domains (Narrative vs. Discourse), phrase functions
    - --construct "Gen 2:3" -- Analyze construct chains
    - --lemma "שׁבת" -- Find every occurrence of a Hebrew lemma
    - --search "sp=verb vs=qal vt=perf" -- Search by grammatical features
- Used by: Research agent for Hebrew word studies and grammar analysis
Greek Grammar Analysis¶
greek_parser.py -- Greek NT Morphological Parser¶
Parses the Greek New Testament (N1904 via Text-Fabric) with full morphological analysis.
- Modes:
    - --verse "MAT 5:17" -- Full parsing: Greek text, lemmas, Strong's numbers, tense, voice, mood, case, number, gender, person
    - --clause "ROM 13:10" -- Clause structure analysis
    - --lemma "πληρόω" -- Find every occurrence of a Greek lemma
    - --search "mood=participle" -- Search by grammatical features
- Used by: Research agent for Greek word studies and grammar analysis
greek_text_compare.py -- Textual Variant Comparison¶
Compares the Nestle 1904 (N1904) critical text with the Textus Receptus.
- Modes:
    - --verse "MAT 5:17" -- Word-by-word comparison with differences highlighted
    - --chapter "ROM 13" -- Full chapter comparison
    - --stats -- Overall variant statistics across the NT
- Used by: Research agent when textual variants are relevant to interpretation
greek_parallel_passages.py -- NT Parallel Passage Finder¶
Finds parallel passages within the Greek New Testament, especially useful for Synoptic Gospel comparisons.
- Modes:
    - --find "MAT 5:17" -- Find parallel passages
    - --compare "MAT 5" "MAR 12" -- Compare two chapters
    - --synoptic -- Full Synoptic Gospel parallel analysis
- Used by: Research agent for finding related NT passages
Context Analysis¶
concept_context.py -- Theological Concept Context¶
Finds verses sharing the same theological concepts as a given verse, prioritized by contextual proximity (same chapter > same book > same author > other).
- Extracts theological concepts from a verse via its Strong's numbers (e.g., LAW, SABBATH, COMMANDMENT, HOLINESS)
- Finds other verses using those concepts, organized by expanding circles of context
- Scopes: --scope chapter, --scope book, --scope author, or full Bible
- Used by: Research agent to understand how an author or book uses a concept
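The expanding-circles priority (same chapter > same book > same author > other) can be expressed as a rank function. The record fields (book, chapter, author) are hypothetical stand-ins for whatever the tool tracks internally:

```python
def proximity_rank(source: dict, candidate: dict) -> int:
    """Lower rank = closer context to the source verse."""
    if (candidate["book"] == source["book"]
            and candidate["chapter"] == source["chapter"]):
        return 0  # same chapter
    if candidate["book"] == source["book"]:
        return 1  # same book
    if candidate["author"] == source["author"]:
        return 2  # same author
    return 3      # rest of the Bible

src = {"book": "Exodus", "chapter": 20, "author": "Moses"}
hits = [
    {"ref": "Deu 5:12", "book": "Deuteronomy", "chapter": 5, "author": "Moses"},
    {"ref": "Exo 20:11", "book": "Exodus", "chapter": 20, "author": "Moses"},
    {"ref": "Mat 12:8", "book": "Matthew", "chapter": 12, "author": "Matthew"},
]
hits.sort(key=lambda h: proximity_rank(src, h))
print([h["ref"] for h in hits])  # ['Exo 20:11', 'Deu 5:12', 'Mat 12:8']
```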
query_verse_context.py -- Rich Contextual Query¶
Combines multiple analysis tools into a single contextual query.
- Modes:
    - Default -- Full context analysis
    - --similar -- Find semantically similar verses
    - --pericope -- Show the full pericope (passage unit)
    - --grammar -- Grammatical pattern analysis
    - --theme -- Chapter theme inference
    - --theological -- Theological concept parallels
    - --layer chapter/book/topics -- Specific context layer
    - --strongs H7676 -- Find all verses containing a Strong's number
    - --topic "SABBATH" -- Find verses associated with a Nave's topic
    - --author Paul -- Author statistics
- Used by: Research agent for multi-dimensional context analysis
Grammar Reference¶
semantic_grammar.py -- Grammar Textbook Search¶
Semantic search across 10 Hebrew and Greek grammar textbooks.
- Hebrew grammars: BDB (Brown-Driver-Briggs), Futato, BHSG (Basics of Biblical Hebrew Student Grammar), GKC (Gesenius-Kautzsch-Cowley), Waltke-O'Connor
- Greek grammars: Duff, Hudson, Machen, BDF (Blass-Debrunner-Funk), Wallace
- Options: --hebrew or --greek to filter; --book gkc for a specific textbook
- Used by: Research agent to verify grammar claims against standard reference works
Study Index & Search¶
semantic_studies.py -- Existing Study Search¶
Finds previously completed studies related to a concept using semantic similarity.
- Input: Natural language query (e.g., "sabbath commandment", "moral law")
- Output: Study slug, title, question, relevance score, tags
- Used by: Scoping agent to discover what related research already exists
build_index.py / build_studies_index.py -- Index Rebuilder¶
Regenerates the master study index (INDEX.md) and semantic search embeddings after a study is completed.
- Used by: Analysis agent after writing CONCLUSION.md
Interpretive Principles¶
The analysis agent follows these principles when drawing conclusions from the gathered data:
- Scripture interprets Scripture. Clear passages determine the reading of unclear ones -- when the connection between passages is verified by shared vocabulary, OT quotation, or tool-verified parallel score. Using clear passages to interpret unclear ones is standard hermeneutics, not an inference.
- The plain meaning of words is explicit. "Only" means only. "Remember" means remember. "Holy" means set apart. These are not separate conclusions -- they are what the verses say. Do not classify the plain meaning of words as inference.
- Context determines meaning. Analyze words and phrases using expanding circles: verse > paragraph > chapter > book > same author > whole Bible. Context must match before assuming the same meaning across passages.
- Scripture is the only authority for doctrine. Church fathers, ancient Jewish sources, and denominational traditions are not authoritative. The question is always: "What does the Bible say?"
- The counter-claim to an explicit statement is often the real inference. If the text says "Remember the sabbath day, to keep it holy," the necessary implication is that the Sabbath is to be remembered. The inference is "this command was only for ancient Israel" -- which applies a qualifier the text does not state.
Web Supplement¶
In addition to the local tools, extended lexicon entries were occasionally retrieved from:
- Blue Letter Bible (blueletterbible.org) -- Extended Hebrew/Greek lexicon definitions when local Strong's data needed more detail