IGS Discussion Forums: Learning GS Topics: What "is" general semantics?: Off-Topic Posts (Sub-Discussion)
Author: Ralph E. Kenyon, Jr. (diogenes) Wednesday, August 13, 2008 - 08:32 am

Milton, you failed to follow the guidelines requested.

I've asked the moderator to move your response into a sub-discussion entitled "Arguments against and discussion of proposed definitions". And that goes for anyone else who does not want to participate in the collection of positive assertions as to what general semantics "is" (among other things).

Author: Ralph E. Kenyon, Jr. (diogenes) Wednesday, August 13, 2008 - 10:38 pm
Author: Ralph E. Kenyon, Jr. (diogenes) Thursday, August 14, 2008 - 10:52 pm

Thomas,

You apparently did not grasp the sense of my last sentence: "Negative" assertions cannot be included under the misstated and misused sound bite.

That means that you cannot include a "not a" form of statement under the "not" in the sound bite. My logic illustrated that perfectly.



The sound bite has two parts.
Whatever you say it is, it isn't.
This expands to the simple assertion, "It isn't what you say it is", or

IT is not what you SAY IT is.

This is an indirect quote that "means":
Any TERRITORY is not any verbal map of that territory.

Indirect quotes are generally taken as transparent whereas direct quotes are generally taken as opaque.

John said Mary was sick is taken in English as different from John said "Mary was sick"; in the former, the indirect quote, 'Mary was sick' is generally understood as known by the speaker and possibly his or her own abstraction; whereas in the latter, the direct quote, the speaker is taken to know only the exact words spoken by John.

As far as the applicability of logic to life, well, that is precisely what science is, and it is why science is so phenomenally successful in building "hard" science models. General semantics, best illustrated by Popper's philosophy of science and the method of falsification, uses exactly modus tollens, direct logic, to prove theories wrong based on failed predictions. Your claim, that it is a mistake to apply logic to "real life", is directly contrary to the main principle of general semantics: "sane" reasoning - namely to apply the methods of science and mathematics to our daily lives.

Your suggestion that it is a mistake to apply logic to "real life" shows that you have not understood what general semantics is mainly about. Korzybski's aim was for everyone to learn and use the methods of mathematics and science and to apply them in our daily lives, and he called this the only "sane" form of reasoning. All else is "un-sane" or "insane".


quote:

It is sad indeed to deal with even young scientists in the colloidal and quantum fields who, after taking off their aprons in the laboratory, relapse immediately into the two-valued, prevalent aristotelian orientations, thus ceasing to be scientific. (1941)


S&S Introduction to the Second Edition, p. lix

Let's not have any more logically and mathematically naive claims that it's a "mistake" to apply the methods of science and mathematics (valid logic) to "real life".

Michel Foucault said "It is in vain that we say what we see; what we see never resides in what we say". This Is Not a Pipe (Berkeley: University of California Press, 1983), p 9, cited in Art and Physics by Leonard Shlain, Harper, 2007 (originally 1991)

All our physical theory statements fall technically under the "unknown" class in ternary logic. Deft application of testing, logic, and observation has allowed us to show that some of the past theory statements fall into the "false" class of ternary (or four-valued) logic. But we have no known way to show any of these theory statements to be in the "true" class, because none of them starts in the true class. Any hypothetical science theory statement starts in the unknown class.

We can proceed to a four-valued logic system which includes "true", "corroborated", "unknown", and "false", arranged in order of strength.

A AND B yields the weaker of A and B.
A OR B yields the stronger of A and B.

Not true = false
Not corroborated = unknown
Not unknown = corroborated
Not false = true.

(Science excludes untestable hypotheses, so we don't need to consider those - thus eliminating all the questions of belief systems.)
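
Here is a minimal sketch, in Python, of the four-valued logic just described: values ordered by strength, AND yielding the weaker operand, OR the stronger, and NOT reversing the order as in the table above. The names (STRENGTH, conj, disj, neg) are illustrative, not standard terminology.

code:

# Sketch of the four-valued logic described above (names illustrative).
# Strength order, strongest first: true > corroborated > unknown > false.
STRENGTH = {"true": 3, "corroborated": 2, "unknown": 1, "false": 0}

def conj(a, b):
    """A AND B yields the weaker of A and B."""
    return a if STRENGTH[a] < STRENGTH[b] else b

def disj(a, b):
    """A OR B yields the stronger of A and B."""
    return a if STRENGTH[a] > STRENGTH[b] else b

def neg(a):
    """NOT reverses the order: true<->false, corroborated<->unknown."""
    return {"true": "false", "corroborated": "unknown",
            "unknown": "corroborated", "false": "true"}[a]

print(conj("corroborated", "unknown"))   # -> unknown
print(disj("corroborated", "unknown"))   # -> corroborated
print(neg("corroborated"))               # -> unknown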

General semantic "is" using "sane" reasoning, that is, eliminating fallacious reasoning by applying the methods laboratory science, mathematics, valid logic, and empirical testing, in our daily lives.

Author: Ralph E. Kenyon, Jr. (diogenes) Thursday, August 14, 2008 - 11:05 pm

Milton, you failed to follow the guidelines requested, and you re-posted a duplicate of your post that was already moved to the sub-discussion.

First, your personal definition must begin with "general semantics 'is'" with the "is" in quotes, and be a positive assertion.

Second, your justification must be brief.

The guidelines request that you NOT deny or argue with anyone else's "definition", which is exactly what you did.

Consequently your post was unresponsive, argumentative, and does not contribute to a collection of facets, views, and abstractions that are to be included in a comprehensive list of positive responses to the question that every initiate or lay person asks.

If you want to be a positive contributor to this thread, provide your positive assertion 'general semantics "is"' (note the quotes), and feel free to add "among other things" to each of your uses of the quoted word "is".

But prioritize your list so as to indicate your most important positive (assertion format) abstraction first.

If you have more to say, take it to the sub-discussion.


Moderator, please delete Milton's duplicate post, and you may delete this response at the same time, or you may move both to the sub-discussion.
Thanks,

Author: Ralph E. Kenyon, Jr. (diogenes) Friday, August 15, 2008 - 08:19 am

Some alleged general semanticists apparently do not understand that the extensional device, specifically quotations, may be used with the word 'is' (among other words) so as to explicitly indicate the requirement for careful abstraction when reading the text containing such extensional-device-marked words. Those who rail against such use exhibit a "signal reaction" to the word 'is' itself without any regard for the overall message offered as constrained by its context.

If we wish to not turn away prospective students of general semantics, then we need a few quick and good one sentence answers to the question they ask, "what is general semantics?", and we need to be able to choose from among many such abstractions something that fits the immediate perceived need of the potential student. The "general semantics is not what anyone says it is" answer communicates a number of very negative reactions to potential students, losing many of them immediately.

The impressions that get communicated include:

1. They can't answer the question, because they don't know what it is, in spite of the gobbledygook noise they spout in response.
2. It's one of those pseudo-science mystical systems with incoherent rhetoric.
3. It's a verbal cult.
4. It's plainly inconsistent.
5. Etc.

Membership has been shrinking.
"Generational" abstraction over the decades have resulted in fewer courses including fewer aspects of general semantics being taught - significant informational entropy losses. As a result proponents spend more time arguing about what Korzybski "meant" and reviewing his words - like ("other") religions or cults do. Much modern hard science and mathematics has never been incorporated into the regular teachings. When was the last time anyone explained relativity using the basic mathematics of algebra at any general semantics seminar? Has anyone done such since Stuart Mayper died (stopped teaching)? Is there anyone left who can explain Tarski's model theory? How many alleged general semanticists go glassey-eyed at the mere mention of logic and mathematics - the virtues of which Korzybski continually extolled? Does anyone even remember that so-called "Aristotelian" logic (Not those stupid misquoted three so-called "laws of thought") are not only subsumed under multi-valued logics, probability, an inferential statistical techniques (parochially called "non-Aristotelian methods"), but is the core of consistency that enables all the multi-valued methods to function?

Our need is for an entry "toolbox" that contains about a half-dozen, possibly more, well thought out, high level abstraction maps of general semantics stated in the form of a classic "is" of identity definition that may include a non-allness qualifier.

Examples:
General semantics "is" (principally) ...

General semantics "is", for the present purposes, ...

It will be the responsibility of the "expert" to use Don Kerr's approach and become familiar with the prospective student, news person, etc., sufficiently to choose the most effective tool at the time of answering the question
1) in grammar that the listener can understand, and
2) with a "definition" that uses words to evoke the desired response in the listener - his or her "known" experiences most likely to give them an answer they can understand from our inferred model of their perspective.

(Many, most?) People "go away" from general semantics when given only negative answers.

The concept of modern science - that we experience only an abstract model, even at object levels - directly contradicts "lay" realism; it cannot be directly understood from the realism perspective, and that means that the negative assertion - the map is not the territory, "what anyone says general semantics is is not general semantics" - has no immediate content validity from the perspective of "lay" realism. As soon as you start talking map not territory, word not thing, you show an obvious "truism", but you try to apply a two-dimensional (two-level) point of view to a one-dimensional perspective (we see what's out there) held unconsciously, and therefore as a signal reaction, by many (most?) prospective initiates.

General semantics "is" the system of applying modern scientific knowledge and methods (including valid logic and mathematics) advocated by Korzybski as a way to improve the efficiency and effectiveness of communication and understanding in our every day lives as well as of active time-binding (the transmission of "knowledege" from generation to generation) and by doing so with conscious awareness of the process as understood.

Do you/we want to successfully engage more potential general semanticists? Then answer their initial questions in a format and manner that they can understand, one that shows some relevance to their immediate needs and beliefs. There "are" (Milton, that was an "is" of existence for you) certainly enough different "general semantics 'is' ..." formulations lying around. Let's stop proscribing their use because of "signal" reactions to the word 'is' by "identifying" it as "identifying", especially when we find the word properly "extensionalized" with the extensional device, quotation marks.

Find out something about a prospective initiate's experiences.
Choose words that evoke those experiences in a way that allows entry to the listener's world view at a level that is basic, but significant, to both general semantics and the prospective initiate.

Author: Ralph E. Kenyon, Jr. (diogenes) Friday, August 15, 2008 - 11:17 am

Hello Nora,

Thank you for your emotional response (I would label your domineering and pedantic "challenge" childish, egotistical, and counterproductive), and for the conclusions you get from starting with said emotional judgement as premises.

Feel free to start your own thread asking for responses that do not match the cognitive paradigm of the question asked. As they say on "What Do You Know" (NPR), "If you are a stickler ..., get your own show."

The approach I am taking takes advantage of adaptive resonance, as described by recent brain research, to facilitate the communications connection. It does not change from a round peg to a square peg to fill the round hole.

Your logical connective begins with a false hypothesis, "If I were just starting out, ...", and anyone with even a modicum of logic knows that a conditional with a false hypothesis has a truth value of true; that is, as the saying goes, "starting with false proves [sic] anything" (both true and false).
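
A small illustration, in Python, of that point about material implication: when the hypothesis P is false, "IF P THEN Q" evaluates to true regardless of Q, so nothing about the conclusion is thereby established. The function name is illustrative.

code:

def implies(p, q):
    """Material implication: IF p THEN q is false only when p is true and q is false."""
    return (not p) or q

# With a false hypothesis the conditional is true for either value of the
# conclusion, so "starting with false" supports any conclusion, Q or not-Q.
for q in (True, False):
    print(f"P=False, Q={q}: IF P THEN Q = {implies(False, q)}")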

Author: Ralph E. Kenyon, Jr. (diogenes) Friday, August 15, 2008 - 10:48 pm

Loel Schuler writes, "Since no one has mentioned it, I wonder if you have referred to the July 2007 issue of ETC:? Lots of nice general semantics 'is' statements with attributions."

Thank you Loel, I will be looking at that list.

In ETC: A Review of General Semantics, Vol. 64, No. 3, July 2007, p. 190, an unnamed editor writes:


quote:

"Bob [Wanderer] demonstrated the best time-binding with ["General Semantics: a
Compendium of Definitions"], drawing on many well-known general semantics writers and teachers,
as well as some less familiar names, and adding a bit of his own substantial
understanding.


Unfortunately, the characterization of the list as representing "the best time-binding" misses the mark. Wanderer presents only quotations and author names. In no case does he provide the full reference citation; the where and when indices are missing. A "best" time-binding would include not only the author's name, but also the date the quote was first recorded and the publication or medium in which it was recorded or uttered. To bring the Wanderer article up to the "best" standards requires that each item be researched, and its original use properly indexed as to where and when. Without the complete space-time-name (where, when, who) the claimed formulation cannot be a fully authenticated time-binding record item. Attributing a quote to a named person is a start, but we need more extensional authentication.

Anyone who can provide a place and date for any of the items in the referenced list, please do so.

Author: Ralph E. Kenyon, Jr. (diogenes) Friday, August 15, 2008 - 10:55 pm

Thomas wrote

"'General Semantics' is two words."

In doing so, he changed the subject from the concept by intuition to the words that name that concept. He also neglected to use the extensional device around the word 'is', thus writing a sentence in metamathematical language that did not address the request. It is a non-responsive jest, and as such off-topic.

Author: Ralph E. Kenyon, Jr. (diogenes) Saturday, August 16, 2008 - 08:30 am

"nonsense" is not one of the possible truth values in three-valued or four-valued logic.

Virtually everyone has allowed themselves to succumb to the "IF I had only done X THEN the result would be Y", and this is frequently followed by a sequence of "If [Y] happened then [Z] would have happened", each time picking a positive result until a chain of hypothetical abstractions arrives at a highly desired non-reality. [Sorry, but I don't have Quine quotes for variables embedded in propositional functions. Square brackets will have to do.]

Because the truth value of the conditional is "true" (based on a false hypothesis), the "truth value" of the hypothesis is passed through the entire chain to the conclusion - namely that it is false. (In multi-valued logic, the value might be unknown.) My caution is that any conclusion is supported, both positive and negative. Consequently I do not fill my cranium with any such sequences beginning a conditional with a false hypothesis. I point out that a negative or a neutral conclusion could just as well have resulted, such as the person might have been struck by lightning if they had done something different. Both chains of reasoning are valid, but the truth value of the outcome is false in binary logic and false or possibly unknown in multi-valued logics when the hypothesis is false.

I illustrate my point with my version of the old adage, to wit: "If wishes were horses, we would all be armpit deep in horse manure."

To think and behave as I do in this regard is an example of avoiding using a fallacy in reasoning. IF we have shown that a proposition is false, then we do not use it as a starting point in any reasoning. In this case, the hypothesis is in the category of a "scientific" observation statement - a fact about the past. Building any plan - a sequence of connected actions beginning with starting conditions - on a known-to-be-false starting point is definitely "un-sane" reasoning.

The examined "IF ... THEN ..." sequence, translated from the past to the future, makes the hypothesis conditions "unsatisfied" (not yet observed) and therefore having a truth value of unknown, and, under these conditions, the truth value of each conditional itself comes into play. Many of the conditionals many people come up with have little or no basis in corroborated experience. Moreover each is subject to the X1, X2, ..., XN does not entail XN+1 property of even strongly corroborated theories, not to mention the notion that Xi could be "the same" as Xj is considered false in general semantics.

Starting a chain of reasoning beginning with a false hypothesis is a known fallacy. Beginning with a hypothesis with an unknown truth value is a potentially testable "theory", and, when the propositions involved describe actions that get taken, constitutes a plan. Plans can go awry when the conditionals are not carefully thought out and chosen from only strongly corroborated theories, such as an inexperienced person trying to build a house. To make a plan with a starting point in the past is "un-sane". So is beginning reasoning with a known false hypothesis.

As a matter of fact, the beginning of non-Euclidean geometries took the fifth postulate, which had been assumed to be true, changed it to a contradictory form, and then tried to reason to something known to be false. The attempt failed, and the result was consistent but different geometries. In this case the actual truth value of the fifth postulate was unknown. Because of the nature of mathematics and logic, it has now been proved that this postulate is independent of the four postulates of absolute geometry. In the case of maps of our "physical world" and the goings-on there, the best truth value we can have for empirical theories (conditionals) is strongly corroborated.

So, for Thomas, my sentence apparently did not evoke any "meaningful" experience to connect to his "senses" thus provoking his "senseless" (aka nonsense) comment.

Author: Ralph E. Kenyon, Jr. (diogenes) Monday, August 18, 2008 - 10:12 pm

Thomas wrote, "Sigh....you can only show propositions are false in mathematics. This is not mathematics we are speaking here. I wish you could see that."

Every observation statement in empirical science is a "proposition" that maps an event that was observed and recorded. Every "prediction" made by a scientific theory is a "proposition" that maps a potential event capable of being observed or the contrary of a potential event capable of being observed. If the event is duly observed at the appointed time, then the observation of that event changes the category from predicted to observed and the status of the proposition from unknown to true. If the contrary of the event is duly observed at the appointed time, then the observation of that contrary event changes the category of the proposition from predicted to not observed ("observed to not happen") and the status of the proposition correspondingly from unknown to false.

According to the philosophy of science, particularly as taught in the past at general semantics seminars and as detailed in Popper's writings, two-valued logic connects the observation statements and the propositions in the theory, and observation of the results of tests shows, using three-valued logic, that some false propositions transfer their falsity back to the theory via the mechanism of modus tollens. Testable propositions, mapping the application of theory, can be, and are, shown to be false, and so the theories that predict these propositions are, by modus tollens, also, as Popper puts it, "falsified".
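
A minimal sketch, in Python, of that falsification step: the theory entails a prediction; if the observation contradicts the prediction beyond the margin of error, modus tollens transfers the falsity back to the theory, which is marked "disconfirmed"; a successful prediction only marks it "corroborated". The function name and numbers are illustrative.

code:

def test_theory(predicted_value, observed_value, tolerance):
    """Popper-style test: a theory entails a prediction Q (IF theory THEN Q).
    An observation outside the margin of error makes Q false, and modus
    tollens carries that falsity back to the theory ("disconfirmed");
    agreement only "corroborates" the theory, it never proves it true."""
    if abs(predicted_value - observed_value) <= tolerance:
        return "corroborated"
    return "disconfirmed (falsified by modus tollens)"

print(test_theory(predicted_value=9.8, observed_value=9.79, tolerance=0.05))
print(test_theory(predicted_value=9.8, observed_value=11.2, tolerance=0.05))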

As far as mathematics is concerned, propositions are only shown to be "false"(or "true") conditionally depending on the starting axioms or postulates, and then only provided valid logic is used.

In both cases, empirical science and mathematics, propositions evaluate to "true", "false", "indeterminate", "unprovable", etc., only conditionally on both the starting premises and the rules of inference used to connect one proposition to another.


Nora's latest... A strawman confusing "emotional" with "semantic". See Gilbert Ryle for an explanation of a "category mistake". Her map reveals a lot more about the map maker (herself) than it does her "target".

Author: Ralph E. Kenyon, Jr. (diogenes) Tuesday, August 19, 2008 - 09:54 am

Thomas,

Stuart Mayper studied with Popper. The "as far as I know" just indicates that the speaker has not experienced the subject matter; it holds no value with respect to the validity of the content of the subject matter. To imply it does is to take the stance of "an expert" and to apply the fallacy "If I haven't heard of it, then it does not exist", a common ploy in un-sane reasoning applied by many so-called "experts". That you have issues with Popper's work does not mean that it is not essentially the current philosophy of science or that it is not what Korzybski advocates we adopt as sane reasoning. The "modern" in "modern applied open epistemology" (which was carefully written for the articles of association of the institute, most likely with Korzybski's explicit approval) - oh, gee, is that the appeal to "authority" that you give no credit to? NO - it is in the time-binding record. "Modern" means using the current methods with the current knowledge. If one is to apply "epistemology" - the theory of knowledge - one needs to become familiar with the historical methods and the methods that have been shown to be invalid, and work vigorously to avoid using any identified fallacies.

Newton's proposition about the addition of velocities is indeed false, as it has been replaced by the relativistic substitute. But that does not mean that a "false proposition", indexed over a range of variables and extensionalized for specific instances, cannot produce a value that differs by an amount that we cannot measure. Even a stopped clock appears "correct" twice a day. If we were to be precise, we would call Newton's addition of velocities a propositional function which requires specific amounts, dates, and times to produce an individual proposition, and an observer would abstract to that proposition, evaluating that it is satisfied (within the observer margin of error) [evaluated as "true"] or that it is not satisfied (within the observer margin of error) [evaluated as "false"]. But the propositional function, expressed as a theory statement "IF you add velocities per Newton, THEN you will obtain such and such a value", fails to produce the predicted observation at high velocities, so, as a general proposition, it has been shown to be false.

Newton: w = u + v
Relativity: w = (u + v) / (1 + uv/c²)

Newton's theory and his corresponding formulas have been "falsified" by, as Stuart Mayper used to say, "Popper's Chopper". However, Newton's is still a practical method at velocities much lower than the speed of light. The difference has been measured for astronauts who visit the space station.
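
A short comparison, in Python, of the two formulas just given (c in meters per second), showing that at everyday speeds the two results agree far below any measurable difference, while at large fractions of c they diverge dramatically.

code:

C = 299_792_458.0  # speed of light, m/s

def newton_add(u, v):
    """Newtonian addition of velocities: w = u + v."""
    return u + v

def relativistic_add(u, v):
    """Relativistic addition: w = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1.0 + u * v / C**2)

# Everyday speeds (m/s): the difference is unmeasurably small.
print(newton_add(30.0, 25.0), relativistic_add(30.0, 25.0))

# Large fractions of c: Newton's sum exceeds c, relativity's does not.
u = v = 0.9 * C
print(newton_add(u, v) / C, relativistic_add(u, v) / C)  # ~1.8 c vs ~0.994 c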

Tarski built the model-theoretic definition of truth in a model, "informing" the correspondence theory (mapping) in precise terms. (See Science and Sanity.) That is what is applied in both general semantics and in the philosophy of science (where general semantics ultimately comes from). A difference is that we general semanticists, if I may skip the "alleged" and thus be accused of identification, are "commanded" to apply the methods of modern science and mathematics to our ordinary lives - to reason "sanely" as opposed to "un-sanely".

General semantics demonstrates that propositional functions are "false" by starting with a falsified observation and applying modus tollens to the theory statement that made the observation prediction. The technical result, of course, is that the entire system of which the theory statement is a part has been shown to be inconsistent, but the degree of corroboration for other theory statements is held as a measure to consider them not disconfirmed individually. As a result, the "new theory statement" added to a strongly corroborated theory package becomes the victim of Popper's chopper. Newton's entire system was "proven" to not model the observation statements, as it made predictions which did not agree with subsequent observations. Relativity predicted that light is bent by a star's gravitational field, and the amount of that bending has been measured and found consistent with relativity, while contradicting Newton's predictions. Newton's formulas are still "strongly corroborated", but with a "new" indexing caveat - "at velocities much less than the speed of light".

Binary logic, however, does not have "almost", so the general proposition - without the velocity indexing - becomes false. "Oh yes, Newton is useful for practical engineering at earthbound speeds and energies, but as a general principle? No, it is false." So, relativity is "true" then? No, but it is a more strongly corroborated model than Newton's. Relativity has its competitors, all of which predict our observations to date, but which differ in ways we have not yet been able to measure. Relativity is "strongly corroborated", meaning that it has made many successful predictions and no observation has contradicted its predictions. (But it still could be superseded by a "more advanced" theory, and that is why, were we reduced from four-valued logic to three-valued logic, we would have to call it still unknown.) Our current philosophy, including general semantics, holds that we cannot know if a general proposition is true until all time and all possible occurrences have passed, so such propositions remain "unknown" or "corroborated in various degrees", or they can be shown to be false, but they cannot be shown to be true.

And just in case some of you think we can dispense with binary logic, think again. Each multi-valued logic depends upon binary logic as an embedded core. Just as non-Euclidean geometry is an extension of Euclidean geometry, and as non-Newtonian physics (relativity) is an extension of Newtonian physics, multi-valued logics (non-Aristotelian reasoning) extend two-valued logic (the predicate calculus - parochially known as "Aristotelian reasoning"). Binary logic is the core of all higher logics and is part of the meta-language in which they are described.

Author: Ralph E. Kenyon, Jr. (diogenes) Wednesday, August 20, 2008 - 01:20 am

Thomas, what you describe as "playing with words" illustrates the comprehensive structure I presented. You want a soundbite, not a comprehensive answer. A "soundbite answer" confuses and identifies levels of abstraction by presenting complexity as a simpler structure. The complexity involved in the multi-level model-theoretic approach demanded by model-theoretic semantics and general semantics cannot be reduced as you suggest. Your "flippant" replies show your need to reduce the complex to the simple, and are therefore both too abstract and inadequate.

You cannot dispense with the core binary logic that underlies and makes possible higher-valued logics. If you think you can, you do not understand logic or the nature of mathematics.

Author: Ralph E. Kenyon, Jr. (diogenes) Wednesday, August 20, 2008 - 10:14 am

If you don't want my answers, then don't invite them by replying.

Author: Ralph E. Kenyon, Jr. (diogenes) Thursday, August 21, 2008 - 10:16 am

Ad hominem does not address content.

Author: Ralph E. Kenyon, Jr. (diogenes) Monday, August 25, 2008 - 01:25 am

Vilmart,
Your projections of my motives are incorrect.

A theory statement or proposition is false universally if it is false in a single instance. Only when you index it does it become a propositional function. Newton's formulas are a good approximation, but they do not work for all values of the variables, and that is what makes them false. There is a big difference between a proposition, which has no time or space indexing, and one that does have variable range indexing and tolerances. Newton's formulas have no such qualifiers. And when we use them, we bring the restrictions and qualifiers.

Non-binary systems are described at the meta level using binary logic. They all begin with definitions of terms and the application of those definitions, and the definitions are strictly yes or no. These may be extended with multi-valued definitions, but they have binary logic as their core, at the meta level, without which they would be inconsistent or incoherent.

Author: Ralph E. Kenyon, Jr. (diogenes) Monday, August 25, 2008 - 09:07 am

Vilmart, Again, your paraphrases attributing interpretation and motive to my formulations are incorrect.

I can respect your "I think" with respect to unquantified propositional logic statements; that applying "all" seems overly strict to you; however, that is the way that propositional logic works. P is expanded to "for all x P(x)", and a single instance of failure makes the expansion, and hence the unexpanded proposition, false. This is a high level of abstraction - the level where consistency can be validated. You seem to be talking about using a slightly lower level of abstraction, in which there is a wishy-washy fuzzyness in the certainty of an alegged proposition. At the level at which you seem to want to apply "reasoning" (I'll call it that to distinguish it strictly from logic), you seem to want to let a "formulation" (rather than call it a proposition) be called not false even though there cases when the formulation goes against experience, such as, for example, attempting to use Newton's formulas to predict the reactions of a body traveling at 90 percent of the speed of light. Newton does not work in this case, but it works passibly well to describe the path of an arrow I shoot into the air, well, almost, because it does not account for the wind, the rain, or the leaves it passes through on the way. Well the law of momentum seems to work ... hmmm, that doesn't work either, everything I throw slows down. Ok, how about gravity. Oops, that doesn't work either because everything I drop seems to go just so fast and no faster. Are you getting my drift? Newton's formulas don't cover even practical applications because they don't cover the whole picture. Friction, wind resistance, etc., these are all nearly impossible to calculate in any "real-world" situation. And just to be clear, neither does relativity. That is because we are using highly simplified maps - very abstract theories that eliminate consideraton of all kind of factors.

So, when you want to think of formulations as somehow "almost" true, you are not at the level of abstraction at which logic applies. Unfortunately, the level of discourse you ask for is not consistent, and in many cases systems of formulations may even turn out to be incoherent.

To apply the tools of mathematics and logic - where consistency "lives" - we must go up another level of abstraction and translate the lower level statements into quantified propositional (and predicate) calculus, often using the unquantified shorthand.

At this level, Newton's formulas do not model the world, because there are observable events which do not satisfy the formula. As a model Newton's formulas are "false" in the model.

Recall that, per Tarski's initial work, as developed and abstracted, a "model" consists of tokens in a language, a set of "objects", predicates on the tokens, relations on the "objects", and a mapping function between the tokens of the language and the "objects". (Note: I normally reserve the word 'object' to indicate an experience at the object level of the structural differential, but the word 'object' is technical in model theory, so I will use it here, with quotes.) A set of formulas in the language is "true in the model" if there are relations on the "objects" such that the corresponding tokens satisfy the predicates in the formulas. (This is the formal definition, which you may describe as the correspondence theory of truth, as the basis for "similarity of structure", or even as the notion that "structure is the sole content of knowledge".)
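
A toy sketch, in Python, of that setup: language tokens, a set of "objects", a mapping function from tokens to "objects", and relations on the "objects"; a formula counts as "true in the model" when the "objects" its tokens map to stand in the corresponding relation. All names and the example relation are invented for illustration only.

code:

# Toy model in the spirit of the description above (all names illustrative).
objects = {"o1", "o2", "o3"}                               # the model's "objects"
mapping = {"a": "o1", "b": "o2", "c": "o3"}                # tokens -> "objects"
relations = {"taller_than": {("o1", "o2"), ("o2", "o3")}}  # relations on "objects"

def true_in_model(predicate, token1, token2):
    """A formula predicate(token1, token2) is "true in the model" when the
    pair of mapped "objects" belongs to the corresponding relation."""
    return (mapping[token1], mapping[token2]) in relations[predicate]

print(true_in_model("taller_than", "a", "b"))  # True: ("o1", "o2") is in the relation
print(true_in_model("taller_than", "b", "a"))  # False: ("o2", "o1") is not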

When we apply this to the "real world" we are treating the "mapping function" as our concept-by-intuition "representation" and its inverse "reference". (When I "refer" to some putative "thing" with a name, the name is said to "represent" that putative "thing".) We also have a problem in "real life" that we don't have in Tarski's model or in other "pure" theoretic models, namely, that we don't "know" the "objects" (putative "things") our names ("tokens") refer to. We have no direct access to these "objects" (putative "things") under the reference relations. So we have uncertainty in our reference / representation function in that we do not know what, or even if, said putative referents exist.

When we apply model theory to what is going on, we must make sure that our language is abstracted to the level at which we can test for and ensure consistency, that is, that the language of the model, including any rules of inference for deriving other statements in the model language, is both consistent and coherent. Only binary logic can satisfy this criterion. When consistency is assured, we can build a meta-model that incorporates more truth values. But if we never abstract our language to this level of consistency, we have no assurance that the model can be consistent and coherent.

The "fact" that we do not know our model "objects" (putative "things") and that we do not know the reference / representation fuction embodies what Korzybski called the "general principle of uncertainty". Any statement we make, that we propose as a model statement, has the problem that we do not know what its components refer to or even if the names in the langugae have a corresponding "object" (putative "thing") in what is going on. This so-called "fact" of language reference in the context of science has been brought "rudely" to our attention on a number of times when our thought to be "the Truth" formulations suddenly proved "wrong" and a major paradigm shift occured - viewing what putative "things" the world seems to be made up of / how it is put together as suddenly changed (Kuhn).

You might say that humanity (personified) "woke up" and said, "I'll never be that absolute and confident again. Every time in the past that I was sure I knew what was, it turned out later that I was wrong. So, now I will hold myself ready for my current view to also be wrong - in other words, what I seem to know is generally uncertain."

To connect this to your words, I do not "want to restrict human thinking [to] formal logic"; but I do want to emphasize that without formal logic we can have no certainty about the reasoning from some statements to others. Only when we have this certainty - about the relationship between some statements and others - only then can we effectively test the reference / representation connection. In model theory "truth" is defined in terms of this relation between the language and the "objects". It is a "semantic" definition of truth. We say that when an "object" "satisfies" a formula, that formula is "true" (in the model). In the case of "science" and the "real world", we don't know what the "objects" (putative "things"), or the relations among them, "are", so they remain "uncertain" even though we may have tested the statement many times. Then we subjectively reduce the probability of error, NOT in the sense that the statement "accurately" represents what is going on, but in the sense that we can rely on future tests continuing to agree with past results.

But, if we do not abstract to the level of logic, we have a very "soft" approximation to a model with much more room for error, error because our so-called "knowledge" is not backed up by consistency in our formulations.

Vilmart,
In the case of your chalk incident, allowing for translation from another language, it sounds like the teacher's vocalization only approximated describing - imprecisely - his or her predicament - being unable to write - and you did not see through the uncertainty in his or her language to the contextual need. Had he or she said "I can't write", or "I can't write with that", or "I need something to write with", you would not have had a formulation with a structure contrary to your own observations. Also, you were not aware of his or her thoughts, which he or she less rigidly encoded in language. It's a good example of how imprecision in choosing words places more responsibility on the listener to apply error correction techniques and to be aware of the general principle of uncertainty. Moreover, in a social context there are many reasons for uttering formulations, and not all of them are for "giving information" or are even conscious to the utterer.

Just to go back to your statement, "As regards the use of words in daily life or in other sciences than mathematics, I think it is too rigid," you go against Korzybski when he urges the use of logic and mathematics in and out of the laboratory; he laments the fact that many young scientists revert back to "un-sane" reasoning when they leave their laboratories. Korzybski is urging the use of strong methods not only in the laboratory sciences, but in our daily lives as well. That means using valid logic for consistency and inference, while remembering the general principle of uncertainty, as well as remaining conscious not only of our own abstracting but of the many layers of abstraction we model in others and in media.

Hope this helps.

Author: Ralph E. Kenyon, Jr. (diogenes) Tuesday, August 26, 2008 - 07:53 pm

Vilmart,
Well said, except for "any mathematical proposition applied to the real world is false, or ill-formulated," which does not follow. When we formulate a proposition for use in modeling the "real world", we use intuition and other reasoning to create a reasonable-to-us hypothesis. That statement is hypothetically true or false, but, according to Popper, we cannot prove either. The first truth value we have for the statement is "unknown". Now we have to use it to make predictions using only valid inferences. For example, using Newton's equations, we can construct a conditional that says: if we drop something from a sufficient height, at T seconds it will be at Y position with V velocity. We can simplify this to IF P THEN Q. Now we perform the test, and we observe Q as predicted (within the margin of error of measurement) or we observe NOT Q (exceeding the margin of error of measurement), and we assign the truth value T to P and we assign T or F to Q, depending on our observations. Then, applying logic, we get a truth value for IF P THEN Q. If we get false, THEN we assign the semantic correlation value "disconfirmed" to the theory from which we derived the conditional, namely, in this case, Newton's law of gravity. If we get true, THEN we assign the semantic correlation value "corroborated" to the theory from which we derived the conditional, namely, again, Newton's law of gravity.
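
A sketch, in Python, of that test sequence with illustrative numbers: derive the prediction Q from the theory, observe, and assign "corroborated" or "disconfirmed" to the theory according to whether the observation matches within the margin of error. Names and values are illustrative, not a standard implementation.

code:

G = 9.81  # m/s^2, local gravitational acceleration (illustrative value)

def predicted_state(t):
    """Q derived from the theory: distance fallen and velocity after t seconds of free fall."""
    return 0.5 * G * t**2, G * t

def assess_theory(t, observed_y, observed_v, margin):
    """Compare the observation with the prediction and grade the theory."""
    y, v = predicted_state(t)
    q_observed = abs(observed_y - y) <= margin and abs(observed_v - v) <= margin
    return "corroborated" if q_observed else "disconfirmed"

# At t = 2 s the theory predicts about 19.6 m fallen at about 19.6 m/s.
print(assess_theory(2.0, observed_y=19.7, observed_v=19.5, margin=0.3))  # corroborated
print(assess_theory(2.0, observed_y=25.0, observed_v=19.5, margin=0.3))  # disconfirmed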

At the level of the logic, P, IF P THEN Q, and Q, we use only truth values. When we bring in the relation which we call reference/representation and the operational definitions, we have a semantic structure that relates what is happening to the logical propositions and conditionals. BUT, because we do NOT know what is going on, we have no way to ascertain the truth value for the conditional. We do have semantic truth values for the variables, but they depend upon conventional usage. Nobody can get access to "what is going on" except through the mechanism of abstracting, and that does not yield what is going on; it only yields the perception (map).

"In that case we cannot use any more mathematics to represent physical events.

This does not follow. We have to apply mathematics at the logic level and remember that the same statement also has a semantic level interpretation, in which the "truth" value changes. At the level of theory consistency we have binary logic; at the level of semantic representation, "true" and "false" are forbidden to us; we must use "corroborated" and "disconfirmed". Both of these can vary over the probability range, and we may use Bayesian techniques on them to relate between the semantic and the logic levels. We "do the math" with logic, and we use more math to convert between binary truth values and probability values. See this thread for more information on Bayesian math.
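
One hedged sketch, in Python, of the kind of Bayesian bookkeeping alluded to here: a prior credence in a theory is updated on a successful or failed prediction, giving a graded degree of corroboration rather than a binary verdict. The likelihood numbers are purely illustrative assumptions, not values from the thread referred to above.

code:

def bayes_update(prior, p_obs_given_theory, p_obs_given_not_theory):
    """Posterior credence in the theory after seeing the observation."""
    numerator = p_obs_given_theory * prior
    evidence = numerator + p_obs_given_not_theory * (1.0 - prior)
    return numerator / evidence

# Illustrative numbers: a successful prediction raises credence,
# a failed one (improbable if the theory held) lowers it sharply.
credence = 0.5
credence = bayes_update(credence, p_obs_given_theory=0.95, p_obs_given_not_theory=0.2)
print(round(credence, 3))  # about 0.826 after one successful prediction
credence = bayes_update(credence, p_obs_given_theory=0.05, p_obs_given_not_theory=0.8)
print(round(credence, 3))  # drops after a failed prediction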

We do have some statements at the semantic level that we call "true" or "false"; these are the observation statements, and they are "true in the model" or "false in the model" because we observe that the statement is "satisfied" or that its contrary is satisfied. Four truth value categories: 'true', 'corroborated', 'disconfirmed', 'false'.

Author: Ralph E. Kenyon, Jr. (diogenes) Wednesday, August 27, 2008 - 10:11 pm

True or false are logic values.
Corroborated and disconfirmed are semantic values.
We look at the formulations with a multi-ordinal eye in which both levels are included. The formulation is "proved" false at the logic level, but because the reference / representation function is unknown, we cannot satisfy the conditions for "truth" and "falsity". When the physics theory is changed from light being instantaneous to light having a fixed, limited speed, then, relative to the change in the theory, Newton's formulas are false. Semantically they are disconfirmed, but they are also inconsistent with the new overall theory, hence false at the logic level - false precisely because they do not predict values consistent with the relativity assumptions.

So true/corroborated and false/disconfirmed are not to be elementalistically split for formulations in the current map. What is true in the theory is not disconfirmed, and what is disconfirmed in the application is false in the theory. The problem is with what is known, and there we do not have truth (and falsity) for theories, only for observation statements. But they also have a semantic level of interpretation in which the abstraction from what is going on to the formulation holds the principle of general uncertainty.

Author: Ralph E. Kenyon, Jr. (diogenes) Thursday, August 28, 2008 - 01:28 am

We "know" the standard model of physics is not correct, because it does not account for all the forces together.
Electro-weak + the strong force = GUTS.
Add gravity and get TOE.
I have not heard that any clear verified breaks with general relativity put it out of the running as far as its coverage area is concerned.

I would say that restricting our speed makes Newton "very useful", "practical", "convenient", but never "true" nor not "disconfirmed".

Remember, our maps are both semantic and logic entities, not just logical ones, so our criteria for their use must use the semantic abstraction, not the logic abstraction. The logic abstraction is for their internal consistency.