This is an observation about the article rather than a comment about its conclusions.
ChatGPT's description of Intelligent Design, when you asked it to describe ID from the point of view of someone who believes in it, was almost incredible. I could see many places where it was essentially quoting you and Steve Meyer. Yet it was so well done that I couldn't have done any better given hours to do so.
I'm still surprised by this.
Thanks Tim. It is a remarkable technology. I give myself some credit here in formulating the prompt that elicited its description of ID from the point of view of an ID proponent, and also in giving it the context of having it first spout the party line about ID not being science. But what's impressive here is what it did with that prompt and context. It gives the impression of significant competence in understanding ID, both from the vantage of critics and from the vantage of proponents. Hope you're well. It would be nice to get together again. --Bill
Will do!
This remarkable essay could form the core of a new Dembski book.
This is excellent and very helpful as a primer on what philosophers of science say, something my training in biochemistry seriously lacked. My late father was a mathematician, and he used to repeat ad nauseam that universities should require calculus-based statistics courses for all science and engineering majors. I'm starting to sound like him with regard to philosophy of science courses, too. Both would certainly be humbling for many.
Thanks also for engaging with Levin. Mathematical Platonists are fascinating. Have you engaged much with Penrose’s attempt to answer the causal impotence objection? To me, it sounds more like evasion dressed up in poetics than a successful defeater - a clever use of mind to avoid Mind. Still, that he admits the unreasonable effectiveness of mathematics is a huge concession from one of the most important living scientists.
Brilliant framing of the Levin divergence through philosophy of science lenses. The point that patterns alone can't impose themselves on matter without an actual selecting cause really clarifies why his Platonic space feels underpowered. I've worked with systems that look engineered, and there's always a traceable design input, not just a cool mathematical correlation.
Nicely put! "Underpowered" captures very much the sense I have with Levin and The Third Way people in general. They've got good intuitions about what the key problems are, but the causal powers they are willing to invoke to resolve those problems are insufficient.
The QED Model proposes a framework for understanding the emergence of life, consciousness, and human uniqueness through the lens of quantum principles and intentional divine action. In considering the history of life on Earth, the 'bushes of life' analogy illustrates a pattern of relatively abrupt appearances of major biological lineages (phyla), rather than a purely gradual branching from a single trunk. This view emphasizes that the fossil record reveals distinct 'bushes' representing these major groups, with limited evidence of transitional forms demonstrating a smooth, step-by-step evolutionary ascent through purely naturalistic mechanisms.
The QED Model posits that the emergence of new, higher phyla, particularly during events like the Cambrian Explosion, presents a significant challenge to standard evolutionary theory. We contend that mutation and natural selection, given their inherent limitations and the constraints of geological timescales, have not been demonstrated to have the capacity to generate the entirely novel body plans characteristic of new phyla. Therefore, the QED Model proposes that the abrupt appearance of these distinct, foundational 'bushes' - each representing a separate and unique body plan - necessitates the influence of the Designer. This intervention was triggered when a planetary bifurcation threshold was met. It was not a random evolutionary step but a pre-programmed, irreversible 'system state' change, guided by the simulation's inherent design. The record, with its lack of continuous transitional fossils, suggests these new 'bushes' were introduced as separate, complete units, forming a 'grove' rather than a single branching tree.
The emergence of Homo sapiens at what we term the 'Genesis Point' also stands apart. Rather than just another branch on these preceding 'bushes,' Homo sapiens represents a unique 'sprout' - a qualitatively distinct lineage characterized by a remarkable confluence of genetic changes and unparalleled cognitive and behavioral capacities. This event reflects a more direct or intensified influence from the Designer's pre-programmed design. The diversification within the established 'bushes' over time might exhibit patterns consistent with pre-programmed, guided, or even overtly coerced natural processes under the Designer's orchestration. The origin of the 'bushes' themselves, however - the abrupt appearance of entirely new foundational body plans or higher taxonomic groups such as phyla - and other major 'sprouting' events strongly suggest either a more direct intervention or a fundamental aspect of the Designer's initial 'programming' of the 'biological simulation,' often coinciding with or shortly following significant environmental shifts or catastrophic events. The QED Model interprets this moment as a 'sprouting' of a fundamentally new blueprint - a kernel-level rewrite that distinguishes Homo sapiens not merely by degree, but by kind.
I have a working model of design that I would love for you to comment on, if I could send it to you.
Why don't you contact me through my blog, billdembski.com (contact page) -- send me a synopsis there of your model. Thanks!
Did you receive it? I would love to hear your thoughts.
Thanks for the prod. It got put in spam. I'll look at it and hope to be in touch early next week by email.
Tyvm!
I’d be interested to know if the net-negative response from the various LLM’s is actually a programmed bias or simply a product of the predictive algorithm which has been overwhelmed with net-negative responses (unintended bias).
Interesting point. I raised your objection to ChatGPT. Here is its response, which rejects the programmed bias hypothesis:
It isn’t an “explicit bias switch” where I’m programmed to discredit intelligent design; what you’re seeing is mostly the product of how my training encourages me to mirror the balance of credible, mainstream sources on contested questions and to apply common demarcation criteria used in contemporary philosophy of science and working scientific communities. Because the dominant peer-reviewed scientific consensus treats ID as not yet meeting the usual expectations for a mature theory (mechanism-rich explanations, distinctive risky predictions, broad research uptake), my safest default is to report that mainstream assessment while also flagging how proponents frame ID as an inference-based historical science. That can look like bias, but it’s better described as distributional or consensus-weighted output: if the training data contains far more high-status academic critiques than endorsements, the “center of mass” of likely completions will lean negative unless the user requests a proponent’s case or a different evaluative framework. In other words: it’s largely emergent from prediction under norms about deference to scientific consensus, not a bespoke rule to “downrank ID.”
This phrase sticks out to me: "described as distributional or consensus-weighted output." That seems like the most plausible answer, as most programmers have likely not thought deeply in this area, but the general worldview bias that favors the consensus is really at play. Thanks for digging deeper on this one. Much appreciated.
I am not sure this actually moves the discussion forward. It appears to be a restatement of what has been said repeatedly by the author and others for the last 30+ years. Most of the non-LLM content could itself have been generated word-for-word by any LLM asked to write an SA in the style of the author, right down to the misunderstanding (or perhaps misrepresentation) of the concept and application of Popper's falsifiability, and the trotting out of a mix of impactless intelligent design articles and articles not about intelligent design as evidence of the extensive reach of intelligent design within science. I'm not going to address Levin's work -- he is doing some exquisite biology, using all the standard scientific tools, rigorously testing assumptions and stressing the predictions of his models. It is something of a travesty to speak of his scientific rigour in the same breath as intelligent design.
Give me something new. Give me something that actually works, in practice, when I do biology in a lab. Or write the same SA yet again, I guess.
Hmmm, why not address the author’s claims?
I mean, I think I already did. There are a lot of words in the article, but they are largely the exact same case in favour of ID that has been made, unchanged, for the past 30 years, and it meets the same very well-trod objections. ID fundamentally fails as a scientific proposition because it comes furnished with no mechanism for applying the internal logic of ID to reject claims of design. There is no way to use the necessary properties of intelligent design to reject a claim that something was intelligently designed if in fact it was not the product of design. You could claim that absolutely every single thing is intelligently designed, and there is no way to apply intelligent design to categorically filter out the things that were not. This means that, in both a logical and a statistical sense, it is impossible to build up any evidence in favour of intelligent design, because an explanation that is compatible with every possible observation predicts none of them.
I've said it till the cows come home that false negatives are not a problem for ID -- if design detection methods fail to confirm design, the presumption is non-design (unless one's metaphysics, such as Aristotle's, suggests everything has purpose -- but even here a Thomist could distinguish between primary and secondary causation, with nature operating by secondary causes but unable to bridge certain gaps -- Thomas would have argued that the resurrection was beyond the capacity of nature). In any case, the naturalist faces the same problem that anything might be designed but we simply don't have evidence for it. A coin lands heads. Was it intentional that it should land heads? Barring direct evidence of an intelligence acting thus, it might still be the result of intelligence (for both the naturalist and the non-naturalist). What makes ID scientific is that we can ascertain that certain events give clear marks of intelligence -- it's irrelevant that other things lack such marks but still may be designed. To your point about mechanism, there's plenty in science that is not explained in terms of mechanism. As for a mechanism for applying the ID logic, the better term would be a "methodology," and there is a well-developed methodology for identifying the effects of intelligence -- see, for instance, chapter 6 of the second edition of THE DESIGN INFERENCE. Your criticisms to me sound muddled and it doesn't seem that you have carefully read our literature.
It has been a decade since I was learning or teaching this stuff, so I've worked with Claude to nail it down. We go from the beginning; feel free to skip directly to "The Unfalsifiable Case". In this document, you can read A as design and B as the appropriate scientific model - for example, evolution in the case of biological structure.
BAYESIAN ANALYSIS OF UNFALSIFIABLE EXPLANATIONS
Setup: We have data X and two competing models A and B.
Definitions:
• P(A), P(B) = prior probabilities (initial credence in each model before observing data)
• P(X|A), P(X|B) = likelihoods (probability of observing data X given each model is true)
• P(A|X), P(B|X) = posterior probabilities (updated credence in each model after observing X)
Derivation of Bayes' Theorem:
The fundamental probability relation states that the joint probability is order-independent:
P(A ∩ X) = P(A|X)·P(X) = P(X|A)·P(A)
Rearranging:
P(A|X) = P(X|A)·P(A) / P(X)
Similarly:
P(B|X) = P(X|B)·P(B) / P(X)
Taking the ratio eliminates P(X):
P(A|X) / P(B|X) = [P(X|A) / P(X|B)] · [P(A) / P(B)]
The posterior odds equal the likelihood ratio times the prior odds.
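As a minimal numerical sketch of this identity (in Python, with every number invented purely for illustration):

# Bayes' theorem in odds form: posterior odds = likelihood ratio x prior odds.
# All numbers below are invented for illustration only.
prior_A, prior_B = 0.5, 0.5   # P(A), P(B): equal initial credence in each model
lik_A, lik_B = 0.02, 0.10     # P(X|A), P(X|B): how strongly each model predicted the data

posterior_odds = (lik_A / lik_B) * (prior_A / prior_B)   # P(A|X) / P(B|X)
print(posterior_odds)         # 0.2, i.e. the data favor B over A by 5 to 1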
The Unfalsifiable Case:
Consider model A that is unfalsifiable—it assigns equal probability to all possible observations. If there are N mutually exclusive possible datasets, then:
P(X₁|A) = P(X₂|A) = ... = P(Xₙ|A) = 1/N
This is a uniform distribution over the data space.
From Bayes' theorem:
P(A|Xᵢ) = P(Xᵢ|A)·P(A) / P(Xᵢ)
where P(Xᵢ) = P(Xᵢ|A)·P(A) + P(Xᵢ|B)·P(B)
Substituting P(Xᵢ|A) = 1/N:
P(A|Xᵢ) = (1/N)·P(A) / [(1/N)·P(A) + P(Xᵢ|B)·P(B)]
Comparing the relative support for A across different possible datasets (the indices swap because each evidence term sits in the denominator of its posterior):
P(A|X₁) / P(A|X₂) = [(1/N)·P(A) + P(X₂|B)·P(B)] / [(1/N)·P(A) + P(X₁|B)·P(B)]
As N becomes large, (1/N) → 0, so:
P(A|X₁) / P(A|X₂) → P(X₂|B) / P(X₁|B)
The relative posterior support for A between different datasets depends only on model B's predictions, not on A's. Model A contributes no discriminatory power.
Additionally, as N → ∞ (continuous limit), the likelihood P(X|A) → 0, and A suffers a Bayesian penalty for spreading its probability mass uniformly across all possibilities—lack of precision is penalized.
Conclusion:
An unfalsifiable explanation that assigns equal probability to all possible datasets cannot gain differential evidential support. While the absolute posterior P(A|X) may vary depending on what competing model B predicts, the ratio of posteriors across different datasets is determined entirely by B's discriminatory power, not A's.
Put simply: a model that "explains everything" equally makes no predictions. It cannot accumulate evidence in its favor relative to competing explanations because it fails to discriminate between different possible observations. Evidence requires differential prediction.
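A short Python sketch of this limiting behaviour (model B's likelihoods are hypothetical numbers chosen only to make the point visible):

# Model A is unfalsifiable: it spreads probability 1/N uniformly over N possible datasets.
# Model B makes sharp predictions: it expects X1 and all but rules out X2.
N = 100_000
prior_A = prior_B = 0.5
lik_A = 1.0 / N                    # P(X_i|A), identical for every dataset
lik_B = {"X1": 0.8, "X2": 0.05}    # P(X_i|B), hypothetical values

def posterior_A(lik_b):
    # P(A|X_i) by Bayes' theorem, treating A and B as exhaustive
    evidence = lik_A * prior_A + lik_b * prior_B
    return lik_A * prior_A / evidence

print(posterior_A(lik_B["X1"]) / posterior_A(lik_B["X2"]))   # ~0.0625
print(lik_B["X2"] / lik_B["X1"])                             # 0.0625: set entirely by B

Increasing N only drives P(A|Xᵢ) toward zero for every dataset - the uniform-spreading penalty noted above - without ever changing which observations count as evidence.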
False negatives are no problem for ID if you are content for it not to be eligible for inclusion within science, remaining a purely faith-based religious belief, like the Resurrection of Christ. If that is your intent, then that is fine. But if you are determined that ID garner scientific evidence and be incorporated into our scientific knowledge, then it is a show-stopper issue.
"A coin lands heads. Was it intentional that it should land heads?"
This is not a scientific question. Science doesn't deal with intent. It deals with what happens and how it happens. I can intend for a coin to land heads from here until next week, but the scientific treatment of the physics of coins falling is unaffected by my intent.
"What makes ID scientific is that we can ascertain that certain events give clear marks of intelligence -- it's irrelevant that other things lack such marks but still may be designed."
This is a perfectly good rhetorical argument, but it is useless as a scientific one. You can't scientifically compare the evidence for or against competing explanations if one of them lacks a falsification criterion, or, in the case of a more Bayesian statistical treatment of evidence, an informative probability of the observations conditioned on the explanation. If nothing is inconsistent with an explanation, then nothing is evidence for it.
Have you held the second edition of The Design Inference in your hands? Are you aware of how we develop Bayesian statistics in favor of the design inference? Where would you be without Claude?
I have not read the 2nd edition, only summaries and overviews and spin-off blogs. I'll see if I can track down a copy.
But I don't see how you can, even in principle, construct a Bayesian model that develops evidence in favour of intelligent design over an alternative explanation when you lack an informative measure of the likelihood term for the data conditioned on design. I have seen the various ways of estimating some probability of the data conditioned on a non-design process and then claiming that this falls below some threshold and so can be rejected, but this is a really fundamental violation of the valid way of applying statistics to choose between alternatives. You have to look at the relative odds of your explanations, not probabilities under a single explanation. And so we get back to there being no way to get at the likelihood of the data conditioned on the design explanation.
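A toy Python illustration of this point (numbers invented out of whole cloth): a small probability under the non-design model alone cannot carry the inference.

# A tiny P(data | non-design) does not by itself favor design.
# What matters is the likelihood ratio - and that requires P(data | design).
p_data_given_B = 1e-9    # non-design model: the data look wildly improbable...
p_data_given_A = 1e-12   # ...but (hypothetically) even more improbable under design
prior_odds = 1.0         # equal priors, for simplicity

posterior_odds = (p_data_given_A / p_data_given_B) * prior_odds
print(posterior_odds)    # 0.001: the "wildly improbable" B is still favored 1000 to 1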
Without Claude, I would have written essentially the same text/maths, but it would have taken me half a day to remember it all, double-check it for typos, and get someone else to verify I had not messed up in some embarrassing way. Either way, we reach the same end-point: when comparing explanations, you can't gain evidence for, or confidence in, an explanation that explains everything and rules out nothing.
I think that we must develop new arguments for the viewpoint that Intelligent Design is indispensable to the causal ontology of our best scientific theories.
The difficulty here is that many of our best modern scientific theories aren't causal. They are predictive, in that they tell you that if you observe this you will observe that, but on most definitions of "cause" they contain no causes, or contain them only as a narrative convenience or as a proxy for the intent of the experimentalist. This was already clear in 1913, when Russell wrote "On the Notion of Cause", and it is all the more so today, given our massive advances in the understanding of mathematics and computation.