Philosophers have studied counterfactuals for centuries, but mathematical logic is concerned with the first sentence, a material implication. For propositional logic, we give an inductive definition for our collection of valid truth functions. Making sense of this involves stepping outside the system and giving an account of truth—more precisely, the conditions under which a propositional formula is true. Inductively, we can assume that $$A$$ is a logical consequence of $$\Gamma$$ and that $$B$$ is a logical consequence of $$\Delta$$. What would we have to say? A propositional formula is said to be provable if there is a formal proof of it in that system. It asserts something about how the world might change, if things were other than they actually are. The term “general proof theory” was coined by Prawitz. In general proof theory, “proofs are studied in their own right in the hope of understanding their nature”, in contradistinction to Hilbert-style “reductive proof theory”, which is the “attempt to analyze the proofs of mathematical theories with the intention of reducing…” There’s a general philosophy in play here, something like “if I can compute it then it’s real.” We can rest a lot more comfortably if we know that our logic is computable in this sense: that our deductive reasoning can be reduced to a set of clear and precise rules that fully capture its structure.
Here’s a brief summary of the main concepts I’ve discussed. Semantically, we read this sentence as saying “either $$A$$ is true, or $$\neg A$$ is true.” Since, in our semantic interpretation, $$\neg A$$ is true exactly when $$A$$ is false, the law of the excluded middle says that $$A$$ is either true or false. Semantics relates the symbols in the logic to the domain you’re trying to model. If $$A$$ was never used in the proof, the conclusion is simply weaker than it needs to be. A propositional formula is said to be a tautology, or valid, if it is true under any truth assignment. And honestly, I also just love this subject and am happy for any excuse to write about it more. Intuitively, a truth assignment describes a possible “state of the world.” Going back to the Malice and Alice puzzle, let’s suppose the following letters are shorthand for the statements: In the world described by the solution to the puzzle, the first and third statements are true, and the second is false.
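To make the semantic picture concrete, here is a minimal sketch in Python (the tuple representation and the names `evaluate`, `v` are my own, not from the post): a truth assignment is a dictionary from variable names to booleans, and extending it to a valuation is a recursive walk over the formula tree.

```python
# Formulas as nested tuples: ("var", "p"), ("not", X), ("and", X, Y),
# ("or", X, Y), ("imp", X, Y).  This encoding is illustrative only.

def evaluate(formula, v):
    """Extend the truth assignment v (a dict) to a valuation on formulas."""
    op = formula[0]
    if op == "var":
        return v[formula[1]]
    if op == "not":
        return not evaluate(formula[1], v)
    if op == "and":
        return evaluate(formula[1], v) and evaluate(formula[2], v)
    if op == "or":
        return evaluate(formula[1], v) or evaluate(formula[2], v)
    if op == "imp":
        # Material implication: false only when hypothesis true, conclusion false.
        return (not evaluate(formula[1], v)) or evaluate(formula[2], v)
    raise ValueError(f"unknown connective: {op}")

# A world where p is true and q is false makes p -> q false:
v = {"p": True, "q": False}
print(evaluate(("imp", ("var", "p"), ("var", "q")), v))  # False
```

A tautology check then amounts to running `evaluate` under every possible assignment, which is exactly the truth-table method.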
The inference rule is the famous modus ponens, which takes the form of a function: it takes as input $$X$$ and $$X \to Y$$, and produces as output $$Y$$. Suppose we have a fixed deduction system in mind, such as natural deduction. Because of the way we have chosen our inference rules and defined the notion of a valuation, this intuition that the two notions should coincide holds true. The first sentence on this list is a lot like our “two heads” example, since both the hypothesis and the conclusion are false. Said more carefully, there is no first-order collection of sentences which has the natural numbers as its unique model. A logic satisfying this principle is called a two-valued, or bivalent, logic. The second sentence is an example of a counterfactual implication. Once we have a truth assignment $$v$$ to a set of propositional variables, we can extend it to a valuation function $$\bar v$$, which assigns a value of true or false to every propositional formula that depends only on these variables. But hold on, what are these “axioms” and “inference rules” I’m suddenly bringing up?
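Taking the “inference rules are functions on strings” idea literally, modus ponens can be sketched in Python over the same tuple encoding of formulas (names here are my own, for illustration):

```python
# Modus ponens as a function on formula trees: given X and ("imp", X, Y),
# return Y; if the shapes don't match, the rule simply doesn't apply.

def modus_ponens(x, imp):
    """From X and X -> Y, conclude Y."""
    if imp[0] == "imp" and imp[1] == x:
        return imp[2]
    raise ValueError("modus ponens does not apply")

p = ("var", "p")
q = ("var", "q")
print(modus_ponens(p, ("imp", p, q)))  # ('var', 'q')
```

The point of the sketch: the rule inspects only the *shape* of its inputs, never their truth values, which is what makes proof a purely syntactic affair.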
This inference is validated in Lean: Similarly, if $$A$$ is false, we can prove $$A \to B$$ without any assumptions about $$B$$: Finally, if $$A$$ is true and $$B$$ is false, we can prove $$\neg (A \to B)$$: Now that we have defined the truth of any formula relative to a truth assignment, we can answer our first semantic question: given an assignment $$v$$ of truth values to the propositional variables occurring in some formula $$\varphi$$, how do we determine whether or not $$\varphi$$ is true? Together, these tell us that whenever the hypothesis is false, the conditional statement should be true. Understanding the rule for implication is trickier. And on the side of many inference rules and few (sometimes zero) axioms we have Gentzen-style natural deduction systems. I sort of did this a week or so ago, with this post, which linked to the slides for a presentation I gave for a discussion group. I’ve said that the purpose of a proof system is to mimic the semantics of a logic. Axioms are simply strings that can be used in a proof at any time whatsoever. I’ll stop there for now! It’s a set of characters, along with a function that defines the grammar of the language by taking in (finite, in most logics) strings of these characters and returning true or false. Then in natural deduction, we should be able to prove. On the side of many axioms and few inference rules we have Hilbert-style systems. Syntactically, we were able to ask and answer questions like the following: given a set of hypotheses, $$\Gamma$$, and a formula, $$A$$, can we derive $$A$$ from $$\Gamma$$? $$\bar v(A \wedge B) = \mathbf{T}$$ if $$\bar v(A)$$ and $$\bar v(B)$$ are both $$\mathbf{T}$$, and $$\mathbf{F}$$ otherwise. This is known as effective enumerability.
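The Lean snippets themselves don’t survive in this copy, so here is a hedged reconstruction of the three implication cases in Lean 4 (my own code, not necessarily the post’s original):

```lean
variable (a b : Prop)

-- If b is true, a → b holds with no assumption about a.
example (hb : b) : a → b := fun _ => hb

-- If a is false, a → b holds vacuously.
example (h : ¬a) : a → b := fun ha => absurd ha h

-- If a is true and b is false, the implication fails.
example (ha : a) (hb : ¬b) : ¬(a → b) := fun hab => hb (hab ha)
```

Each example mirrors a row of the truth table for material implication: the connective is false only in the one case where the hypothesis holds and the conclusion does not.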
The circularity is even more embarrassingly vivid when, in first-order logic, we’re asked to define the semantics of “∀”, and we say “∀x φ(x) is true whenever φ(x) is true of every object in our model”. Inference rules are functions that take in some set of strings and produce new ones. This amounts to evaluating $$\bar v(\varphi)$$, and the recursive definition of $$\varphi$$ gives a recipe: we evaluate the expressions occurring in $$\varphi$$ from the bottom up, starting with the propositional variables, and using the evaluation of an expression’s components to evaluate the expression itself. If X and Y are grammatical strings, then so are (¬X), (X ∧ Y), (X ∨ Y), and (X → Y). So, for instance, in propositional logic we have some set of characters designated to be our propositional variables (we’ll denote them p, q, r, …), and another set of characters designated to be our logical symbols (∧, ∨, ¬, →, and parentheses). In other words, there’s a logic within which one can talk about things like numbers and sets, and the semantics of this logic can be made precise with a proof system. Second-order logic has the expressive power to talk categorically about the natural numbers, but it has no sound and complete proof system.
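The inductive grammar clauses can also be checked mechanically. Here is a rough Python sketch (my own simplification: variables are single lowercase letters, and I use the ASCII stand-ins `~`, `&`, `|`, `>` for ¬, ∧, ∨, → so the strings stay plain):

```python
# A well-formedness check for fully parenthesized formulas, following the
# inductive clauses: if X and Y are grammatical, so are (~X), (X&Y),
# (X|Y), and (X>Y).

def grammatical(s):
    """True if s is a well-formed propositional formula in this toy syntax."""
    if len(s) == 1:
        return s.islower()                      # a propositional variable
    if len(s) >= 4 and s[0] == "(" and s[-1] == ")":
        inner = s[1:-1]
        if inner.startswith("~"):               # negation clause
            return grammatical(inner[1:])
        depth = 0
        for i, c in enumerate(inner):           # find the top-level connective
            if c == "(":
                depth += 1
            elif c == ")":
                depth -= 1
            elif depth == 0 and c in "&|>":
                return grammatical(inner[:i]) and grammatical(inner[i + 1:])
    return False

print(grammatical("((p&q)>r)"))  # True
print(grammatical("p&q"))        # False: outer parentheses required
```

This is exactly the “function that defines the grammar of the language” described above: it consumes a finite string and returns true or false.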