Saturday, September 17, 2011

The Juggling of Certainty vs. Science

[Figure: flowchart diagram of the Scientific Method as an endless loop]
One obvious and repeated barrier to both correction and progress in the field of Textual Criticism of the NT has been a basic ideological conflict, not just between opposing parties, but within individuals attempting to practice TC.

The conflict is this:

Fundamentally, the Scientific Method is tentative and agnostic.  In order to remain truly scientific, it must deal in probabilities, and ever hold the door open to new discoveries which can not only modify current ideas, but completely overthrow them.  Diagrammed as above, one can see that it forms an Endless Loop, without ever establishing any permanent, universal truth. 

Those engaged in Textual Criticism, on the other hand, while desperately desiring to garner the support and credibility of the scientific method, nonetheless cling to ideas which are at base in fundamental opposition to science.  First is the idea of a fundamental Objective Reality, a non-changing universal truth applicable to every situation; second is the idea that science 'progresses' inevitably toward greater and greater accuracy and certainty regarding believed facts.

Neither of these ideas is really a part of Scientific Method, or a necessary ingredient of Scientific philosophy, even though both ideas have been around as long as science, and have more often than not been inextricably bound up with scientific investigation.

The growth of science in the 19th century also saw mathematics advancing alongside it.  In that field, the concept of Convergence in particular, developed from the Calculus, led men to believe that almost any problem could be solved by honing and improving an appropriate method of approximation, which would naturally result in more and more accurate statements about the world.

The New Testament Text was regarded no differently: It was believed to be only a matter of time before textual-critical methods would tighten up and produce a more and more accurate 'original text', finally as sharp and accurate as a photograph, or a scientific measurement of light-speed to 8 decimal places.



Eureka! - Hort's Innovation

Surprisingly, F.J.A. Hort was instrumental in forwarding this ideology.  Contrary to current historians and various opponents, Hort's real innovation in Textual Criticism was not "the genealogical method", or an advance in the evaluation of various sources.  It was the introduction of what is now called, in modern mathematics and computing, "iteration".

Iteration is the repeated application of a set of instructions, a 'program' or algorithm, usually to refine or home in on a result.  Imagine, for instance, a lathe that shapes table-legs: it shapes the wood by repeatedly cutting away waste, leaving the desired pattern behind.
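As a concrete illustration (my own sketch in Python, not anything drawn from Hort), Heron's ancient rule for square roots shows the idea in miniature: the same instruction is applied over and over, and each pass trims away a little more of the error, much as the lathe trims away a little more of the waste wood.

    # A minimal sketch of iteration: Heron's rule for approximating a square root.
    # The same instruction is repeated; each pass refines the previous estimate.
    def heron_sqrt(n, passes=10):
        estimate = 1.0                                    # crude starting guess
        for _ in range(passes):                           # repeat the same instruction...
            estimate = (estimate + n / estimate) / 2.0    # ...refining the estimate each time
        return estimate

    print(heron_sqrt(2.0))   # 1.414213562..., closer to the true value with every pass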

An Algorithm is usually fixed, but it sometimes has optional paths or choices built in.  The flexibility comes through a testing, measurement or decision process (as in the flowchart above, where the 'diamond' shapes mark points in the flow where choices will be made).
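A toy sketch (again my own, purely illustrative) makes the 'diamond' visible in code: each pass through the loop reaches a decision point and takes one of two paths depending on a test of the current value.

    # A minimal iterative algorithm with a built-in decision point.
    # At each step the 'diamond' test (is n even?) chooses which path to follow.
    def collatz_steps(n):
        steps = 0
        while n != 1:            # loop until the process reaches 1
            if n % 2 == 0:       # decision point: even or odd?
                n = n // 2       # one path: halve the number
            else:
                n = 3 * n + 1    # other path: triple it and add one
            steps += 1
        return steps

    print(collatz_steps(27))     # 111 steps before the process settles at 1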

Some objects in mathematics are better and more efficiently expressed as algorithms - a group of ordered steps or instructions, meant to be applied like a recipe or prescription, and often actually acting as a description of a process or phenomenon.  Other objects can ONLY be described by algorithms.  Unfortunately, some objects cannot be expressed by algorithms at all.

When mathematicians began to study algorithms closely, they discovered other, sometimes disturbing properties of these 'objects', such as the fact that some mathematical quantities have no short, elegant formula at all.  (The digits of PI, or the search for Prime Numbers, are examples of things that must be calculated by 'brute force' and step-by-step testing rather than by elegant formulas.)
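A short sketch of such 'brute force' testing (a hypothetical illustration in Python; the function name is mine) shows what this looks like in practice: no compact formula hands us the primes, so each candidate divisor must simply be tried in turn.

    # A minimal sketch of 'brute force' searching: trial division for primes.
    def is_prime(n):
        if n < 2:
            return False
        divisor = 2
        while divisor * divisor <= n:    # test candidate divisors one by one
            if n % divisor == 0:
                return False             # found a divisor: not prime
            divisor += 1
        return True

    print([n for n in range(2, 30) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]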

When mathematicians noticed that some problems and ideas cannot be expressed by algorithms, it became clear that some problems were by their very nature "unsolvable".

On a simpler level, it was clear that some 'formulas' simply did not and could not 'converge'; that is, they could not settle down and spit out one single numerical answer.   Likewise, algorithms did not always produce a useful or reliable result, nor did they always even come to an end.  They were like run-away processes, and if left to themselves would get stuck in endless loops, or randomly wander the universe of numbers.
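The contrast is easy to see in a toy sketch (mine, purely illustrative): one simple rule settles toward a fixed value under repeated application, while another, equally simple rule never settles at all and wanders indefinitely.

    # A minimal sketch contrasting an iteration that settles down with one that never does.
    import math

    def iterate(rule, start, steps=100):
        x = start
        for _ in range(steps):
            x = rule(x)                   # apply the same rule again and again
        return x

    print(iterate(math.cos, 1.0))                     # settles near 0.7390851... (a fixed point)
    print(iterate(lambda x: 4 * x * (1 - x), 0.2))    # logistic map: never settles, wanders chaotically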

Hort's assumption was that by using the novel idea of "iteration", meaning the repeated application of textual-critical principles and techniques to further and further refine the content and certainty of the text, one could come as close to the original text as the extant data and the scientific process allowed.

Unfortunately, Hort was wrong on this entire idea:

(1)  There was nothing in the realm of science that indicated that discovering the 'original text' was even possible, let alone probable.

(2)  There was nothing that suggested that text-critical methods could or should converge toward any fixed text, let alone the true original text.

(3)  Iteration itself had no magical power to force the textual variants to converge into a 'near certain' text, in spite of its allure and mathematical usefulness in certain situations.  If the applied method was flawed, or ill-defined, the opposite result was inevitable.

(4)  The success of iterative methods in other areas of science had no bearing on iteration as an intelligent or useful technique in textual criticism.   In order for iteration to work, the techniques to be iterated must first be sound.

Later, when men of religion attempted to apply mathematical and scientific concepts and techniques to the problem, they were inevitably biased, and their work was tainted by their own conviction that these methods would converge to an absolutely certain 'original text', and that this was the way God intended us to acquire a firmly established, authoritative original text.

Nobody thought to inquire and investigate thoroughly what methods God Himself chose to preserve and deliver the text, and what this meant for the credibility of textual criticism of the NT as a historical science.

As it turned out, God did not use the historical-critical techniques of NT Textual Criticism to preserve and supply the NT text.  God chose simpler, and quite apparently, more reliable methods than those proposed and used by modern Textual Critics to 'reconstruct' the NT text.

These facts strongly suggest that those who wish to establish, secure, or restore the NT text ought to imitate the methods used historically by God Himself for the last 2000 years.

Nazaroo

2 comments:

Jim714 said...

Thanks for this post. I hadn't thought of mathematical iteration as an underlying belief for modern NTTC. I think there are additional reasons for denying the 'scientific' descriptor. First, NTTC theories are not falsifiable, borrowing from Karl Popper. That is to say I am not aware of any test which would determine whether UBS 24 or UBS 27 was closer to the original, let alone UBSX vs. MT. How would anyone decide?

Second, I believe that NTTC theories are testable by applying their methods to cases where we have varying editions of a work, but we also have the autographs. One of my favorite examples for this kind of testing is the poetry of Emily Dickinson; the earliest edition is corrupt, the post-1955 edition is authentic. But if one applied the methods of NTTC to these variants, would they generate the autograph? I doubt it; I think it extremely unlikely. Since the methods would likely fail in a case where we can actually compare to the autograph, why should we grant NTTC methods efficaciousness?

Third, given a particular X, it does not follow that only one condition, or set of conditions, will give rise to X. If there are equally plausible conditions for X then it is inherently problematic to settle on a particular set of conditions as the actual ones that gave rise to X. I believe that in most cases NTTC is in this situation; there is simply no way to scientifically determine which set of conditions gave rise to a particular variant.

Best wishes,

Jim

Nazaroo said...

Thanks for your excellent observations and comments, Jim.

I think some of what TC claims could be testable, if we care to apply the rigor necessary, and are willing to be satisfied with scientific probability estimates, rather than 'black n white'.

Multiple causes with the same results are not just a theoretical possibility, but a rampant problem in the copying process.

I invite you to look at excellent examples of this all-too-common problem here:

Multiple Causes of homoeoteleuton

peace
Nazaroo