7 Comments

This is a fabulous article. I’m not sure who writes the better prose: Bill or ChatGPT.

A huge insight for me is Bill’s discussion at the end about Wikipedia. Personally, I’m not sure I will ever use it again, and if others think likewise, it may die a sudden death, replaced by LLMs that are a lot more honest (even if they are not perfect). If Wikipedia is bypassed and replaced by LLMs, the truth about ID will become more visible and may gain more traction, adherents, and scientific supporters.

Are ID books allowed to be fed into LLMs as training data? This seems to me an important question. It would be a shame if LLMs could only build their models from web articles rather than from reputable published works.

How do the “no free lunch” theorems fit into this discussion? It seems to me that the possibility of extracting significant information from thermodynamic processes is a fundamental axiom of Methodological Naturalism. Ultimately, MN requires particles bumping into other particles to do the heavy lifting. Does ChatGPT provide any assistance in this regard?
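For reference, Wolpert and Macready’s formulation of the no free lunch theorem says that, when performance is summed over all possible objective functions $f$, any two search algorithms $a_1$ and $a_2$ perform identically:

$$ \sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2), $$

where $d_m^y$ is the sequence of fitness values observed after $m$ evaluations. On one reading of this result, a search can outperform blind chance only if information about the fitness landscape is imported from outside the search itself, which is exactly where the question about particle collisions doing the heavy lifting bites.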


Interesting article. I'm glad the author was willing to "wrestle" with the AI rather than treat it like an oracle.

Here's an article about the dangers of treating AI like an oracle.

https://www.discoursemagazine.com/p/the-new-oracles-of-generative-ai

From the article:

"While much of the bemoaning of AI within higher education has focused on students’ cheating, the much more serious threat to open inquiry and truth-seeking is what we’ve described through religious analogy—an omniscient “I” that makes pronouncements that conceal its human, fallible origins; a reality in which such oracles can be selected on the basis of the user’s value system, rendering truth completely subjective; answers in which didacticism supplants objectivity; and a desire for divine conclusiveness and lack of ambiguity in the face of uncertainty."


A “conversation” I had with ChatGPT-3 ended with it telling me that biological evolution was a “necessary truth,” like 2 and 2 equaling 4. Maybe GPT-4o is better. I do hope it will shake off the stranglehold on ideas and history that Big Tech seems intent on protecting.


May I share two examples showing ChatGPT’s subtle errors that smart everyday people would likely miss?

“Falsifiable” is a term used in science for a hypothesis or theory that can be proven false by empirical observation or experimentation. In other words, a falsifiable statement can be tested and potentially shown to be untrue.

According to ChatGPT in Dr. Dembski’s interview:

(begin quote)

Falsifiability: SETI research is inherently falsifiable. If no artificial signals are detected after extensive searching, this would suggest either that extraterrestrial intelligence does not exist within the observable universe, or that it is not using detectable forms of communication. This does not prove non-existence but provides a framework for understanding the limits of detection.

(end quote)

The definition of “falsifiability” calls for a possible experiment or observation that would disprove the contention or theory. The mere lack of confirming evidence does not disprove it; even after many failed attempts to find such evidence, the hypothesis is weakened, not “disproved.” ChatGPT’s assertion is thus an argument, not a fact.
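To make the logic explicit: genuine falsification is modus tollens, where the hypothesis entails a definite observation and the failure of that observation refutes the hypothesis,

$$ H \rightarrow O, \qquad \neg O \;\therefore\; \neg H. $$

SETI does not fit this pattern. The hypothesis of extraterrestrial intelligence does not entail that a signal must be detected by any particular date, so “no signal so far” is not $\neg O$; at best it lowers confidence without refuting anything.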

Also from the ChatGPT interview concerning whether SETI could supply “reproducible” data and results:

(begin quote)

Reproducibility: Any [extraterrestrial] signal detected can be independently verified by other observatories, making the findings reproducible and testable by other scientists.

(end quote)

ChatGPT again grossly misleads the reader. Several observers at different locations witnessing the same event is not an example of “reproducing” the phenomenon; it is an example of multiple witnesses, nothing more. Reproducibility means being able to run the cause-and-effect sequence multiple times, usually in a lab where the experimenter sets up the causes and detects the expected (same) results.

A smart everyday person might not catch the sleight of hand here (and elsewhere as Dr. Dembski points out). In this way, ChatGPT can mislead readers endlessly.


Quite an excellent experiment with ChatGPT. Thank you for sharing it!

Showing how ChatGPT engages in special pleading and parrots the rather shopworn anti-ID clichés effectively reveals the kinds of biases built into the LLM. Watching ChatGPT backtrack when challenged is enlightening: ChatGPT makes assertions that it will itself disavow when pressed.

Aye, there’s the rub. We are explicitly treating ChatGPT as an intelligent researcher and thinker. That fact worries me most. ChatGPT’s responses to the questions were highly competent, both in substance and in English composition. As a young teenager fascinated by science and tech, I would have read ChatGPT’s discussion and considered it factual and authoritative. I wouldn’t have asked questions as Dr. Dembski does, i.e., questions coming from a better-educated foundation and the ability to express them.

People talk about the threats AI poses to humanity. There are many. Among the worst is that everyday human beings already assume that AI and LLM systems, like ChatGPT, Bing’s Copilot, and others, do the research, analyze and organize the data, and present “the truth.” The younger generation will absorb the assumption that AI tells the truth starting from the cradle.

Readers have responded to my articles highlighting the danger of believing AI systems “because they said so” by asking, “So, what do we do about it? How do we know what is true?”

Despite the current ideology that says there is no such thing as objective truth, the opportunity is ripe to develop intellectual and technological tools that keep AI-generated information from being accepted unquestioningly: tools that challenge it, cross-check it, and contrast it with rival accounts, whatever is needed to help people discern truth.
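As a toy illustration of what one such tool might look like, here is a minimal sketch that poses the same question to several models and flags disagreement, so a reader sees competing answers rather than one oracle’s verdict. The model names and the ask() function are hypothetical placeholders, not real APIs; a working version would call each vendor’s own interface.

```python
# Sketch: ask one question of several independent models and flag
# disagreement between their answers. ask() returns canned demo text
# here; a real version would call each vendor's API.
from difflib import SequenceMatcher

def ask(model: str, question: str) -> str:
    """Hypothetical stand-in for a vendor API call (demo answers only)."""
    canned = {
        "model-a": "SETI is falsifiable: a sustained null result disproves ETI.",
        "model-b": "A null result weakens the ETI hypothesis but cannot falsify it.",
    }
    return canned.get(model, "(no answer)")

def contrast(question: str, models: list[str], threshold: float = 0.75) -> None:
    """Print every model's answer and flag pairs whose answers diverge.

    SequenceMatcher gives a crude textual-overlap score, not semantic
    agreement -- a placeholder for a better comparison.
    """
    answers = {m: ask(m, question) for m in models}
    for i, m1 in enumerate(models):
        for m2 in models[i + 1:]:
            overlap = SequenceMatcher(None, answers[m1], answers[m2]).ratio()
            if overlap < threshold:
                print(f"Disagreement ({overlap:.0%} overlap): {m1} vs {m2}")
    for m in models:
        print(f"[{m}] {answers[m]}")

contrast("Is SETI falsifiable?", ["model-a", "model-b"])
```

Even something this crude shifts the user’s posture from receiving a pronouncement to weighing competing witnesses.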


"AI, by loosening Big Tech’s stranglehold on our cognitive real estate, promises to level the playing field for ID, giving it a fair chance to succeed in the marketplace of ideas." I hope you're right Bill. I'm still reluctant to use the many AI 'search' engines that mysteriously appear on my various apps etc. I guess this will change over time for me, as I clearly remember saying to myself not that many years ago now, that I would never switch from my primitive mobile/cell to a smart phone...but of course I succumbed...


Hi Jeff. I was aware as I was writing my concluding comment to this post that I was perhaps being unduly optimistic. But it seems to me that unlike Wikipedia, whose first-mover advantage and collusion with Google give it monolithic control of anything encyclopedic on the Internet, LLMs are going to be developed by multiple interests and will compete against each other. Weaknesses, above all deficiencies in truth-telling, will, I expect, count against particular LLMs, so that people vote with their feet against some and for others. For instance, I'm looking forward to Elon Musk's alternative to OpenAI. I'm therefore guardedly optimistic that LLMs will increasingly give us more truth than Wikipedia and Google.
