Wikiproofs:Tearoom

The tearoom is Wikiproofs' central discussion page. Feel free to start a new topic or join an existing discussion.
Cup of tea about to be converted into a conjecture by skilled mathematicians

Example discussion

Please sign your post with four tildes.--GrafZahl (talk) 22:15, 5 June 2009 (UTC)

better error message for failed export?

I thought I would see whether I could make any progress on finishing Principia Mathematica propositional logic. A little experimentation seemed to show that if an export fails because the page is missing a thm for one of the stmt in the interface, the error message fails to tell the user which stmt was unproved. Would it be easy to add this to the error message? Kingdon 23:48, 3 January 2010 (UTC)

What was the error message you were getting? You may also want to try the command line version of JHilbert [1] with different debug level settings.--GrafZahl (talk) 08:22, 6 January 2010 (UTC)
The easiest way to reproduce this is to go to Nicod's reduction of Principia Mathematica, remove the line "thm (.Add () () (q .→ (p .∨ q)) (" (and the next three lines which go with it), and click preview. The error is:
export (PRINCIPIA Interface:Principia_Mathematica_propositional_logic () . Unable to export parameter

I would have hoped for something like "export failed: statement Add is not proven". It might turn out that downloading JHilbert and the pages in question is indeed going to be the easiest way to work, but part of what excited me about wikiproofs was the possibility of skipping that step. Kingdon 02:40, 7 January 2010 (UTC)
Should be fixed now. The trouble with the current state of affairs is that JHilbert is the outgrowth of command line utilities and the logging/reporting facilities of JHilbert in command line mode vs. server mode are still quite conflated. I'm working on a rewrite but with all my RL stuff things are going very slowly atm.--GrafZahl (talk) 06:41, 7 January 2010 (UTC)
Thanks, looks much better. Don't get too stressed about it; there probably would have been workarounds to this problem. Kingdon 15:18, 7 January 2010 (UTC)

doubled names

If I look at Interface:Nicod axioms, the rendered page contains:

def (¬¬ p) (p | p))


In case it helps, the HTML for the doubled ¬ is:

<span class="beginexp">¬</span><span class="def">¬</span>


Clicking on "edit" shows only a single character:

def ((¬ p) (p | p))


I believe this started happening within the last day or so.

One other thing (which might not be related, or even a bug at all). Uncommenting both commented-out lines at Sandbox shows an error:

def (.¬.¬ p Variable not found Variable not found: (cause unknown)

Seems like the last update introduced some sort of bug (a beginexp is an opening parenthesis, nothing else). Maybe I can look into it over the weekend.--GrafZahl (talk) 10:45, 8 January 2010 (UTC)
The name doubling bug is fixed now. The "variable not found" message is not a bug. You must define all variable names anew for each module: variable names are always local. This makes sense as variables usually have very simple (one or two letter) identifiers and you wouldn't want them to clash due to kind mismatches over several modules.--GrafZahl (talk) 14:37, 9 January 2010 (UTC)
Ah, thanks for noticing that. The only connection between the two is that the doubling bug affected the text of the error message (the existence of the message was, as you point out, unrelated). Kingdon 02:08, 10 January 2010 (UTC)

Similar-looking characters

When copying formulas from other web sites, one is sometimes nailed by similar-looking characters. The two which have come up so far are:

• metamath uses ⋀ (U+22C0 N-ARY LOGICAL AND) for its ordinary conjunction character, while we use ∧ (U+2227 LOGICAL AND). It is easy to make the case that metamath is just wrong on this one, and at least in my font the two are moderately different from each other.
The history on this is that U+2227 was originally used, but c. 2004(?) I received complaints (and verified for myself) that it displayed as a circle in the Mozilla browser on Windows XP Professional (but not Windows XP Home). So I changed it to U+22C0 as a compromise, and also changed the OR to the n-ary version to match. I don't know if this bug has been fixed but don't have Windows XP Pro to verify it. - Norm Megill 3 March 2012 (I'll sign up for an account later...)
Thanks for chiming in. Perhaps "working around a browser/OS bug" is a more neutral wording than "wrong". I guess my sentiment is that browsers/fonts/OSes have evolved a lot since those days, and there isn't nearly as much reason for this kind of workaround nowadays. I have avoided the 𝒫 (power set) symbol in favor of the word "power", because at least at the time I investigated, it seemed to not work enough places, but I don't remember details. I think I probably checked it with http://browsershots.org/ or something like it. The characters in use on wikiproofs.org (as well as some we avoid) should be listed at Help:Editing. Kingdon 04:17, 4 March 2012 (UTC)
I'll be changing AND back to U+2227 (and similarly for OR) on the Metamath site. Hopefully this very old bug has been fixed. Norm 18:55, 7 March 2012 (UTC)
• We use ⋅ (U+22C5 DOT OPERATOR) for multiplication (at least, that's the one at the bottom of the edit screen). On wikipedia, at least on w:Peano arithmetic, they use · (U+00B7 MIDDLE DOT). These look extremely similar, and I don't really see how we can expect contributors to try to keep them straight.

Is there a MediaWiki feature to map all U+00B7 to U+22C5? I'm thinking of the feature of eo.wikipedia.org which maps "cx" to "ĉ" (U+0109 LATIN SMALL LETTER C WITH CIRCUMFLEX), although I'm not sure that particular behavior is exactly what we want (I haven't gotten desperate enough to actually read any MediaWiki documentation, so I have no idea how many choices there are).

It turns out to be pretty handy to copy-paste short formulas from one website to another, and modulo some parentheses and such it tends to work fairly well (even with ↔, ∀, ∃, etc), except for the problem characters above. Kingdon 02:37, 23 February 2010 (UTC)
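The canonical-mapping idea discussed here can be sketched in a few lines. This is a hypothetical illustration, not actual MediaWiki or JHilbert code, and the table covers only the character pairs mentioned above:

```python
# Hypothetical sketch: map look-alike characters to one canonical form
# before comparison. Only the pairs discussed above are included; a real
# mapping would need a careful review of every character in use.
CANONICAL = str.maketrans({
    "\u00B7": "\u22C5",  # MIDDLE DOT -> DOT OPERATOR (multiplication)
    "\u22C0": "\u2227",  # N-ARY LOGICAL AND -> LOGICAL AND
    "\u22C1": "\u2228",  # N-ARY LOGICAL OR -> LOGICAL OR
})

def canonicalize(text):
    """Return text with non-canonical look-alikes replaced."""
    return text.translate(CANONICAL)
```

Applying this on save (rather than on render) would keep stored wiki text in a single consistent alphabet, which is roughly the "canonical mapping" option discussed below.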

Metamath also uses U+00B7 MIDDLE DOT (for example in sqr2irr). Perhaps the solution to that one is just to switch. Dunno. Kingdon 02:48, 23 February 2010 (UTC)
I've switched the dot in MediaWiki:Edittools for now. On my system, the characters ⋀ and ∧ look markedly different on the rendered webpage, but only moderately different in the editor window (which uses a fixed-width font). Raph and I already discussed this problem some months ago. The correct solution is probably to define equivalence classes of similar-looking characters within JHilbert. That won't happen too soon, though, as there are other, more pressing problems. Meanwhile, it seems sensible to install an equivalence mapping in MediaWiki if it's not too dear. Do you have any idea how the Esperanto Wikipedia people implement their "cx" mapping? Some extension, or some Javascript?--GrafZahl (talk) 09:33, 23 February 2010 (UTC)
The Esperanto wikipedia doesn't rely on javascript; it works without it (and, obviously, doesn't map any characters until the page is submitted). I don't think it is what we want, though, as the character gets mapped back to "cx" for editing. I'm not sure I like the JHilbert solution, as it would lead to pages containing a mixture of different symbols and the reader would be expected to figure out which ones are equivalent (particularly in a case like the conjunction, where the wrong symbol really does look wrong, at least to my eyes). None of this is particularly high priority, though (especially since the dot seems managed for now), so I wouldn't lose any sleep over it. Kingdon 13:58, 23 February 2010 (UTC)
Now this is odd. The cx and friends replacement is hard-coded in ./languages/classes/LanguageEo.php of the MediaWiki source. So this is not an easy way to maintain character mappings. The original JHilbert plan was to create equivalence classes only for characters that can be expected to be rendered equally by common fonts (such as the two dots), i.e. roughly the same class of cases where you would apply it to host/domain names for security reasons. For merely similar looking characters, one could implement canonical mappings. This would effectively eliminate non-canonical characters from JHilbert text (but not all Wiki text). Not sure if that's what we really want, so, as you suggest, let's wait and see for now.--GrafZahl (talk) 15:24, 23 February 2010 (UTC)

I propose to add ≠ (U+2260 NOT EQUAL TO) to MediaWiki:Edittools, just to the right of ∃. Kingdon 16:46, 3 March 2010 (UTC)

Done.--GrafZahl (talk) 17:59, 3 March 2010 (UTC)
Thanks, looks good. Kingdon 19:38, 3 March 2010 (UTC)

Kind verification and coercion

I'm concerned about something and want to make sure it gets attention.

In Interface:First-order_logic, subst is introduced, then in First-order_logic defined:

term (formula (subst object object formula))
def ((subst y x φ) (((x = y) → φ) ∧ (∃ x ((x = y) ∧ φ))))


But in Basic_arithmetic it is being invoked quite a bit with terms of kind "nat", rather than "object".

Is kind verification not working? If not, I think there is serious risk of unsoundness.

Also, I realize I don't understand enough about the kind system in JHilbert to know whether terms of one kind can be implicitly coerced into another. In set.mm, the answer is "yes," due to this rule:

  cv $a class x $.


In old Ghilbert, all such syntactical productions need to have explicit s-expression counterparts. Thus, you saw an awful lot of (cv x) for variables that appear in class contexts. The idea was that such things would go away when presenting theorems in the typeset syntax, but it was still awkward and painful to deal with. Thus, getting rid of that was one of the explicit motivations to switch from distinct variables to free variables in the new Ghilbert (thus, the new distinction between var and tvar, for binding and term variables respectively).

So, does JHilbert do implicit coercion from one kind to another? If so, how is it controlled to make sure it's sound? If not, I'm pretty sure there's some serious wrongness going on in the first order logic area here. The previous unsigned comment was made by Raph (talk) 17:53, 3 March 2010.

In JHilbert "coercion" (one form of kindbind) is mutual, meaning that in effect, there is only one kind with two or more names. (I'm pretty sure the old Ghilbert already had this functionality.) In particular, you should not bind set and class as then, all classes would be sets (which means you still need term (class (cv set)) or a similar construct).
You can use kindbind in two ways. The first one is to create a new name for an already existing kind. The second one, available in interfaces only, is to declare two already existing kinds imported via param to be equivalent. Of course, both can be dangerous in their way. The first one in a purely semantic manner: if you declare nat to be another name for object and your universe of discourse is not actually the natural numbers, you'll confuse the hell out of your users. The second one can break things badly if you bind things together that don't belong together, such as formula and object. That's why it's only available in interfaces.
--GrafZahl (talk) 18:56, 3 March 2010 (UTC)
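As described here, kindbind just merges kind names into equivalence classes. A union-find sketch of that reading (the class name and structure are mine, not JHilbert's internals):

```python
# Sketch of kindbind as an equivalence relation on kind names: after
# binding, there is effectively one kind with several names. Illustration
# only; not JHilbert's actual implementation.
class Kinds:
    def __init__(self):
        self.parent = {}

    def declare(self, name):
        self.parent.setdefault(name, name)

    def find(self, name):
        while self.parent[name] != name:
            name = self.parent[name]
        return name

    def bind(self, a, b):
        """kindbind (a b): a and b become names for the same kind."""
        self.parent[self.find(b)] = self.find(a)

    def same(self, a, b):
        return self.find(a) == self.find(b)
```

This also makes the danger GrafZahl mentions concrete: `bind("formula", "object")` would silently merge two kinds that must stay distinct, which is why the second form is restricted to interfaces.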
Okay, but I'm pretty sure what I'm talking about is completely separate from kindbind. You cannot alias the kinds for binding variables and terms (set and class in the set.mm universe). If you did, then ∃ (1) ((1) = (2)) would be not only a well-formed expression, but actually a theorem. So I'm pretty confused now what kinds of protection JHilbert has against this. [Fyi, new Ghilbert has very cut-down kindbind, which cannot be used to unify two existing kinds, but only to introduce a new name for an existing kind, which is useful when exporting from an "adapter". This seems obviously sound to me but I am worried it's too restrictive.]
Also, nowhere in the interfaces or proof files referenced above did I find any actual invocation of kindbind. From what I can tell, the term was defined with one vector of kind args, and instantiated with another. Raph 19:24, 3 March 2010 (UTC)
I also answered this at Talk:First-order logic (a few days ago), but the kindbind is in Interface:Peano axioms (which is imported by Basic arithmetic). Kingdon 19:36, 3 March 2010 (UTC)
Ah, sorry to have missed the response there (I do try to follow Recent Changes but sometimes miss things). I'm 99% sure that kindbind is invalid, that your axiomatization of Peano is unsound, and that in particular I can prove the above nonsensical theorem. I'd be happy to be convinced otherwise, but my strong sense is that it needs to be fixed, and I'm not at all sure how to do it in the JHilbert world without adding explicit (cv x) terms. Raph 19:47, 3 March 2010 (UTC)
I think I understand (somewhat) now. If I'm right, object versus nat is a red herring, and the real problem is the attempt to make object do the work of two things which need to be separated. In peano.mm in the metamath distribution, these are "term" and "var" (and "tvar $a term v $." is the equivalent of cv, basically). In set.mm it is basically class versus set (although set.mm has enough other complications that it perhaps isn't the clearest example). In [2], the new ghilbert features (var vs. tvar) handle this. I think I should probably try to fix this for now with explicit cv (unless var and tvar can be implemented pretty fast). It is quite painful for readability (all the n's and m's in ((n · (succ m)) = (n + (n · m))) need to be (cv n) and (cv m), right?). But the only other choice I can think of is a big disclaimer at the top of First-order logic and friends saying "this is broken until JHilbert gets more features". Kingdon 21:03, 3 March 2010 (UTC)
Oh yes, I see the misunderstanding now, too. The JHilbert command kindbind has nothing at all to do with bound and free variables. It's really only for establishing an equivalence relation among kinds. nat is just an alias for object. The built-in metalogic of JHilbert allows unlimited proper substitution of any single variable in an expression, so you need a different "object" kind for the quantifiers and in your definition of subst. Sorry, I should have paid more attention. And yes, you need explicit coercion terms. I know it's ugly. My plan to fix this ugliness is to add an additional layer above the s-expression representation. But this has lower priority than fixing the metalogic kernel. As of now, I still haven't read the new Ghilbert spec (sorry about that, too) and I'm therefore still undecided as to which of the various proposals to actually prove sound (hopefully) and implement. So you'll have to live with explicit coercion for a while longer.--GrafZahl (talk) 22:22, 3 March 2010 (UTC)
It's OK. If that's the only fundamental error I make on first-order logic, I'll be happy (given how much this treatment differs from textbook versions and how much I'm learning as I go). I hope to start fixing this one tomorrow (using an explicit conversion).

Okay, cool, it seems like we're all on the same page now. Note that the Ghilbert spec is not yet complete - I'm poking at it in moments of spare time. But here are some questions to ponder:

• If we know that the explicit coercions are ugly, and we choose implicit coercion from variable to term, why not also incorporate free variables into the metalanguage? This will eliminate the need for separate foo and foov variants of the same theory (as in set.mm), as well as the stack of hb* theorems to support the use of the former.
• If there is going to be a separate typesetting layer on top of the low-level representation, why not make the low-level as simple as possible and require the term name to come first, before the arguments? Clearly this layer will have to resolve precedence and other similar rules; it seems to me that doing prefix ↔ infix conversion naturally falls into the same category.

In other words, let's take a really close look at the divergences between Ghilbert and JHilbert. The ones that have a really good reason to exist are fine, and we'll write translation scripts so that work can flow between the two systems. But in other cases, I think it makes more sense to resolve by working out which design choice really is best. That's one of the reasons why I'm also writing a Rationale in parallel with the spec itself, to make the reasoning behind those design choices clearer. Raph 23:22, 3 March 2010 (UTC)

I think I agree with what both of you are saying about a layer above the s-expressions except the word "typesetting". I have been assuming this is something which happens on input, not on output. To be even more specific, when I click "edit" I would like to be editing something nice - in infix, with precedence if we get that, and probably with whatever else gets handled at this level. Kingdon 01:34, 4 March 2010 (UTC)
Yes, I'm planning on translation for both input and output. I've already prototyped an input method that lets you type symbols by using Metamath keys - you press "A" and ".", and on the second key the "A" turns into "∀". I have not yet prototyped parsing. One thing that was holding me up was trying to figure out whether I needed to build a seriously general purpose parser (for which I would use some variant of Earley), or whether I could hand-hack something simpler which would deal with the grammar actually in use. I think I'll go with the latter for now, but sometimes it's actually easier to solve the more general problem. Raph 01:59, 4 March 2010 (UTC)
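The two-key input idea can be sketched as a suffix rewrite on the edit buffer. The key table here is a guess at a few plausible Metamath-style mappings, not the actual prototype:

```python
# Toy sketch of the input method described above: an ASCII token followed
# by the trigger key turns into the Unicode symbol. The token table is my
# guess for illustration, not the real mapping.
SYMBOLS = {"A.": "\u2200", "E.": "\u2203", "<->": "\u2194", "->": "\u2192"}

def expand(buffer):
    """Replace a trailing ASCII token with its symbol, longest match first."""
    for token in sorted(SYMBOLS, key=len, reverse=True):
        if buffer.endswith(token):
            return buffer[: -len(token)] + SYMBOLS[token]
    return buffer
```

Checking longer tokens first matters: otherwise typing `<->` would be swallowed by the shorter `->` rule.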
Yes, some convergence would be nice. But I see two different kinds of issues with [GJ]Hilbert. One kind which affects the metalogic, and one kind which affects the user interface, translation front/backends, etc. I would really like to see the metalogical kernel fixed before thinking about anything else. I've posted some of my ideas to this end on Wikiproofs:JHilbert development.
Raph, yesterday I tried to read your Ghilbert documents. There is still a lot of stuff I don't quite get (or maybe I overlooked some documents). So instead of separating kinds, you separate binding variables from term variables. Are term variables allowed to appear as definition dummies in a definiens? How do you specify that a binding variable is not free? I see that there is some extra parameter to the term command, can you give an example?
I understand Carl's proposal a little better and think it should be checked for (un)soundness asap. I'm aware of Raph's concern regarding Carl's separation of def and defthm, and the possibility of using the same definition twice. Maybe we can whittle a counterexample out of that, or see why it is sound nevertheless.
About typesetting, input, output, and parsing. Right now I believe that an Earley parser (or possibly a modification which allows ε-rules) is the best way to go. The user can provide a dynamic grammar quite naturally, by simple alteration of the term command. Earley happily eats all context-free dynamic grammars and provides all parsing ambiguities explicitly. This makes it easy for the user to define nice-looking expressions and provide disambiguators (precedence and associativity). It does have its drawbacks, though. It's not as fast as other parsers, and it's somewhat vague in detecting the reason for a parse error. But typically, single expressions will not be too long.
--GrafZahl (talk) 18:42, 5 March 2010 (UTC)
Sorry, the documents are not complete and are still moving slowly. But hopefully I can answer your questions. No, term variables are not allowed as def dummies. Thus, a definition of true as φ↔φ is not valid. The best way to define true is probably ∀x x=x.
The extra parameter to term expresses that some binding variables may not be free. A canonical example is term (wff ([/] A x ph) (x A)). This specifies that if x occurs free in A, it also appears free in the whole term. Any binding variable mentioned in a term that does not have this additional clause is considered not free in the term. And the list of term arguments is explicit - it is only free occurrences of x in A that pass through; since ph is not mentioned, x is not free in ([/] A x ph) if x is free in ph but not A.
We're probably very similar in our wishes for parsing and typesetting. That said, I'm much happier with divergence in that layer as long as there is convergence on the underlying layer, whether it's exactly the same thing (my preference) or pretty easy to write converters.
Here are just a few random thoughts about what I want for parsing. First, I want the parser to resolve ambiguity in kinds and possibly types (types being an additional explicit argument to quantifiers, and are used in axiomatizations of HOL). My peano axiomatization will have a naive set kind as well as a nat kind (this is mostly for syntactic convenience as it will not allow quantification over sets or, for that matter, any additional proving strength). Thus, there need to be two equality operators (I plan on using = for equality between two nats and =_ for equality between two sets). In the syntax, though, you just say n=m or S=T, and it resolves to the right one. This means that the disambiguation rules need to take into account validity.
Second, I want the parser to handle things often special-cased at the lexical level. Take the representation of decimal numbers, for example. The integer 123 can be represented as the S-expression (|d (1) (|d (2) (3))). It would be awesome for the parser to be able to handle that. Strings and, for that matter, JSON-style structures can probably all fit in there.
Hope this helps, and I'm happy to keep discussing, even as my spec-writing goes slowly. -- Raph 20:13, 5 March 2010 (UTC)
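The digit encoding Raph describes nests pairwise, so "123" becomes (|d (1) (|d (2) (3))). A minimal sketch, rendering the S-expression as a string for clarity (function name is mine):

```python
# Sketch of the decimal S-expression encoding described above:
# "123" -> "(|d (1) (|d (2) (3)))". Digits are kept as a string so
# leading zeros survive; illustration only.
def encode_decimal(digits):
    if len(digits) == 1:
        return f"({digits})"
    return f"(|d ({digits[0]}) {encode_decimal(digits[1:])})"
```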
OK, let's see if I get freeness straight. Let x be a binding variable and let E be an expression. If E is x, then x is free in E. If E is (foo E1 E2 … En), then all occurrences of x in E are free if there is a k ∈ {1, 2, …, n} such that x occurs free in Ek and the declaration of the term foo, say term (bar (foo A1 A2 … An) …) has (Ak x) as additional clause. In any other case, no occurrence of x is free in E. Is that correct? Does x have to be in the parameter list for foo? I guess not, so you can say term (wff (/\ ph ps) (x ph) (x ps)). But what if I have a theory which has only wff variables (propositional calculus) which I want to use later to build set theory on top of it? Can I introduce freeness later?
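That reading of the rule can be written out recursively. The term representation and the clause table below are invented for illustration; this is not JHilbert or Ghilbert code:

```python
# Freeness per the rule above: x is free in a compound term only if some
# argument position has an explicit (Ak x) clause and x occurs free in that
# argument. Terms are tuples (name, arg1, ..., argn); bare strings are
# binding variables. FREE_CLAUSES is a made-up table for illustration.
FREE_CLAUSES = {
    "[/]": {0},     # term (wff ([/] A x ph) (x A)): x passes free via A only
    "and": {0, 1},  # hypothetical term with both (x ph) and (x ps) clauses
}

def is_free(x, expr):
    if isinstance(expr, str):
        return expr == x
    name, *args = expr
    clauses = FREE_CLAUSES.get(name, set())
    return any(i in clauses and is_free(x, arg)
               for i, arg in enumerate(args))
```

Note how the substitution term behaves under this sketch: x is free in ([/] A x ph) only via the A slot, matching Raph's description that occurrences bound by the x and ph slots do not pass through.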
About parsing. The nice thing about the kind/var/term commands is that with only slight alterations you can use them to define complex context-free grammars. For example, you might say something like
kind (wff)
kind (nat)
kind (set)
var (nat $n$ $m$)
var (set $S$ $T$)
term (nateq wff (nat $=$ nat))
term (seteq wff (set $=$ set))

(I abused the $ sign as delimiter for terminals here, so $ would have to be disallowed in expressions and identifiers). From that, the following grammar would be made:
start:
expression

expression:
wff
nat
set

wff:
nat = nat
set = set

nat:
n
m

set:
S
T

That's an unambiguous grammar with terminals n, m, S, T and =. The syntax tree created by the parser corresponds to an S-expression (for example, n=m would become (nateq n m)). More complex grammars may no longer be unambiguous, but with this scheme, validity is never a problem as each grammar rule (except the start←expression rule) corresponds to precisely one command. For more complex terminals, you'd also need another parser before the Earley stage to break up the input string into terminals. A very simple regular grammar would be sufficient here (codable by hand). Disambiguation would be enforced by disallowing whitespace in terminals and forcing the user to insert whitespace where a terminal ambiguity would appear.
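The rule-extraction scheme described above can be sketched directly: each var and term command contributes one production, plus an expression rule per kind. A toy illustration (data layout is mine, not JHilbert's):

```python
# Toy sketch of deriving grammar productions from var/term commands, per
# the scheme above. Productions are lists of symbols; kind names are
# nonterminals, everything else is a terminal.
grammar = {"expression": []}

def _add_kind(kind):
    if kind not in grammar:
        grammar[kind] = []
        grammar["expression"].append([kind])

def var(kind, *names):
    _add_kind(kind)
    grammar[kind].extend([n] for n in names)

def term(name, result_kind, production):
    _add_kind(result_kind)
    grammar[result_kind].append(list(production))

var("nat", "n", "m")
var("set", "S", "T")
term("nateq", "wff", ["nat", "=", "nat"])
term("seteq", "wff", ["set", "=", "set"])
```

Feeding this dictionary to an Earley parser would reproduce the grammar listed above, with each reduction of a term production mapping back to the corresponding S-expression.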
--GrafZahl (talk) 11:59, 7 March 2010 (UTC)

object and variable now split

I've now split object in two everywhere. I'd especially appreciate review of Interface:Axioms of first-order logic and Interface:Peano axioms, because axioms don't have to be proved, and thus don't have that check on errors. I took the approach of using object everywhere there wasn't a quantifier or substitution involved. Some axiomatizations use variable in some of those places, and then the object versions could be proved from those by substitution. Oh, and the naming. I picked variable for the thing which quantification or substitution applies to, object for the thing which goes in an expression (such as on either side of equals), and value for the explicit conversion between the two. While I'm kind of pleased with this set, other names for variable could be quantifiable, substitutable, placeholder, etc, and other names for value could be toObject, $ (by analogy with programming languages like shell or make, in which $ means a variable), etc. Kingdon 17:30, 4 March 2010 (UTC)

I did a quick review. The object/variable split seems entirely fine to me, and I also agree with the naming choice - among other things, it's consistent with Ghilbert, which uses "var" for your variable and "num" for your "nat" (I'm perfectly willing to also use "nat" and am not sure why I chose "num" instead). That said, your definition of "subst" is still slightly faulty. In particular, (subst (succ (value n)) n φ) is always false. I'm not sure this actually causes unsoundness, though. My intuition is telling me that the induction rule is valid when k and n are distinct, and in the special case where they're not distinct, there's no way to satisfy the antecedent, so it's safe. My intuition is also telling me that the faulty definition is going to make it impossible to prove some of the other theorems about substitution, such as composition. In particular, sbccog should prove [S x/y][y/x]φ ↔ [S x/x]φ, and that's not true under your definition. This might not be especially urgent to fix, though, as I think the changes should be relatively local to first order logic - the theorems you do export and rely on should all still be valid under the new definition. Raph 19:17, 4 March 2010 (UTC)
Thanks. As for subst, it does look fishy although I'm not sure either what the effects are. Probably just lack of power, rather than unsoundness, since unsoundness could only happen if we hit one of the loopholes in definition safety (I don't think so, as I see no dummy variables), or if the Induction axiom turns out to specify something other than what it thinks it does (slightly more likely). Perhaps the simplest solution is the definition you posted to Talk:First-order logic a while ago. The only thing we're using subst for right now is introduceSubst and makeSubstExplicit, and I don't currently (anymore) have any plans to do anything more ambitious with it. Kingdon 22:37, 4 March 2010 (UTC)
Hi, the Existence axiom stmt (Existence () () (¬ (∀ x (¬ ((value x) = s)))) ) seems to say there is a variable equal to every object, or in metamath language, that there is a set equal to every class. But that is false, see for instance the theorem vnex.--DanKrejsa 02:53, 26 April 2010 (UTC)
What it says is that for each object there is a variable equal to that object. Which is still false, of course. Let's fix it and see what else breaks… --GrafZahl (talk) 17:16, 26 April 2010 (UTC)
What breaks, for instance, is EqualityReflexivity in First-order logic. IMO EqualityReflexivity does not belong there, at least not without saying more about the objects. Something like x = x, on the other hand, could be proven from other axioms. I'll drop Kingdon a note.--GrafZahl (talk) 17:37, 26 April 2010 (UTC)
Hmm. What would "saying more about the objects" entail? The version of EqualityReflexivity with terms seems to be in peano_thms.gh and peano.mm in the metamath distribution, for example. And certainly by the time we get to Basic arithmetic we need symmetry and transitivity with terms (we use them all over the place), which at least at the moment are proved from reflexivity. If I understand vtocle, it can be used to derive the term version from the variable version. Now, the proof of that in metamath relies on [3] which seems to be related to the existence axiom we are debating. I see that Raph's peano_thms.gh seems to have the version of the existence axiom which DanKrejsa is objecting to. If we are OK with having reflexivity, symmetry, and transitivity for terms, I'd say just adopt them as axioms. Many of the treatments I run into seem to do that, and it seems more straightforward than messing around with existence axioms. But if there is some additional condition needed to make them work for terms, then of course we need to know about that. Kingdon 13:55, 27 April 2010 (UTC)
In email, GrafZahl contrasts metamath's eqid which is for objects (classes, in that context) and is proved in terms of set theory, with equid which is proved in terms of logic and is only for variables. Kingdon 14:31, 27 April 2010 (UTC)
I have now made reflexivity, symmetry and transitivity axioms, at least for now. If this causes problems, speak up. The next thing to blow up is ImplicitSubstitutionForAll (similar to equsal in metamath's set.mm). Kingdon 18:49, 3 May 2010 (UTC)
OK, I think I've figured it out. The way that quantification, sets, and classes work in set.mm is just different from the way it works in Raph's peano_thms.gh (as shown by the extra hypothesis in vtocl; more discussion at Interface:Axiom of quantifiability). Raph's way is probably better for Peano arithmetic and ZF set theory, although some other set theories might find it too constraining. I've separated them into different pages, although at least for the present I plan to build on top of Interface:First-order logic with quantifiability rather than Interface:First-order logic. Feedback would be most welcome (in particular, I just made up the name "axiom of quantifiability" and don't really know whether there is something similar in the literature). Kingdon 22:30, 11 May 2010 (UTC)

Abstract versus Concrete Syntax

Hi Grafzahl and friends...

I finally found your note at http://wiki.planetmath.org/AsteroidMeta/ocat. Thanks for the heads up! I have busied myself during the last 14 months with earning my keep, and frankly, a bit of a rest up from my mmj2 exertions. Going forward I am undecided about where to focus my programming energies, but there are aspects of Metamath-related work that interest me. I think some of these may be relevant to your J/Ghilbert work, though I do not presume to know anything more than I have already coded and tested in mmj2 :-)

One primary topic concerns the grammars -- i.e. syntax and presentation, on input, output and internal. By "abstract" I mean the "normal" form used internally after unambiguous parsing into tree form. "Concrete" refers to external forms, which, for example, may include redundant parentheses. An example of concrete is 'a + b + c' and 'a + (b + c)' being equivalent to '((a + b) + c)' (which is the abstract form) -- noting that '(a + (b + c))' may be equal to '((a + b) + c)' but that is something that must be proved or given as a "law" or axiom.
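The 'a + b + c' example amounts to left-associative grouping. A minimal sketch of that one concrete-to-abstract step (illustration only; a real translator would also handle precedence and redundant parentheses):

```python
# Minimal concrete -> abstract sketch for a single left-associative
# operator: the token list ['a', '+', 'b', '+', 'c'] folds into the tree
# ('+', ('+', 'a', 'b'), 'c'), i.e. the abstract form ((a + b) + c).
def to_abstract(tokens):
    it = iter(tokens)
    tree = next(it)
    for op in it:
        tree = (op, tree, next(it))
    return tree
```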

What we see in Metamath is that the abstract (internal) grammar/syntax is identical to the concrete grammar/syntax. Metamath cannot tolerate redundant parentheses, and even if it could somehow ignore them in specific cases there is no defined mechanism for translating between abstract and concrete grammatical forms; a 'concrete' parse tree would require a different proof than an equivalent 'abstract' parse tree, absent a grammatical translator.

So, for ease of use we would like to see abstract and concrete grammars available. Statements in the concrete syntax(es) would be parsed and then translated to abstract form for processing by the proof engine(s), and then a reverse translation(s) would be performed for presentation output.

In addition, it might be very useful to develop a grammar generator which would accept as input the typical sort of operator hierarchy table and output context free grammars -- both abstract and concrete.

Finally, symbol translation tables could be developed to map between different languages.

p.s. you can email me in the future at k900f73 via yahoo.

Mellocat 18:13, 27 March 2010 (UTC)

Good to see you here!
I have to say, I'm a little confused by the distinction between abstract and concrete grammar. To me, the meaningful distinction is between the low-level grammar, which is (hopefully) pretty easy to parse and unambiguous, and the presentation layer that may sit above it, which is designed to be much closer to traditional mathematics. In some cases, the low-level grammar makes some concessions towards being more readable (e.g. both JHilbert and Metamath use infix), but in other cases not. Perhaps your distinction is more relevant in HOL, where there is both an accessible internal representation (using mk_abs to construct a lambda abstraction, for example) and a more palatable printed syntax used in virtually all communication with the user, though still ASCII-based.
Further, in presentation you can do quite a bit more than just grammar. For one, you can use 2-d layout, like Σ_{i=1}^∞ or e^{iπ}. It's possible to imagine a number of different input methods that would generate these types of results. You can also tweak the spacing to make operator precedence more visually distinct (as I have done for the math typesetting in my interactive app, as well as the experiments generating PDF).
I think the big confusion for me is: what do you expect to be stored in files? You can't store the full 2D graphical representation, so perhaps there's an intermediate grammar, something with Unicode and maybe markup "e<sup>iπ</sup>", for example. Or another possibility (pretty much what I'm doing so far) is to store everything in the low-level representation, and do the presentation on-the-fly.
Anyway, these are just some random thoughts. Nice to have you as part of the conversation, and hope to see you around more. Raph 00:58, 28 March 2010 (UTC)
Hi. I should have said "syntax" instead of "grammar", perhaps.
This link has a discussion of abstract vs. concrete: http://nat.truemesh.com/archives/000531.html
The important quality for syntaxes is that the parse tree is unambiguous to the intended audience, whether a human or a program. Internally, the abstract syntax is boiled down to, say, for '->', just the name/code of implication and the names/codes of the two operands, in a specific order. Externally there may be multiple input and output syntaxes, just so long as their meaning is unambiguous. Redundant parentheses may be present, or as you mention, the output syntax may be MathML, TeX, or whatever. There is a necessary transformation from/to abstract and concrete, a necessary rendering layer for concrete syntax representations, and an input parser for each case where language/data must be input, validated and converted into syntax trees in memory.
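The "one abstract form, many concrete forms" idea can be sketched concretely; the term encoding and renderer names below are hypothetical, not taken from any of the systems under discussion.

```python
# Sketch: implication is stored abstractly as just an operator tag plus
# its two operands; separate renderers produce different concrete
# syntaxes from the same abstract tree.

def render_ascii(t):
    if isinstance(t, str):          # a propositional variable
        return t
    op, lhs, rhs = t
    return f"({render_ascii(lhs)} -> {render_ascii(rhs)})"

def render_unicode(t):
    if isinstance(t, str):
        return t
    op, lhs, rhs = t
    return f"({render_unicode(lhs)} → {render_unicode(rhs)})"

tree = ('imp', 'p', ('imp', 'q', 'p'))   # abstract: name/code + operands
print(render_ascii(tree))    # (p -> (q -> p))
print(render_unicode(tree))  # (p → (q → p))
```

A parser for each concrete syntax would run the same mapping in reverse, back to the single abstract form.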
What is stored in the file/database is up for discussion. I would probably want to be able to store the specifications for abstract and concrete syntaxes separately from the math content. Like "normal form" in database design. That facilitates gathering math content from different sources and sharing without having to modify program code.
This all sounds laborious and cruel, but going back and making changes later is worse.

Mellocat 14:19, 28 March 2010 (UTC)

Ah, I think I understand better. Unless I'm still not getting it, the concrete syntax is what's in the file, and the abstract syntax is the data structure used in the implementation. In Ghilbert, I've been extremely careful to separate the interchange format from the implementation; the latter is basically free to do whatever it likes as long as it gets the proof validity predicate right. In particular, there's no programmatic API for accessing terms. This approach is in stark contrast with HOL (see p. 16 for a 12-line expression representing the abstract syntax of the expression \x. x+1). In any case, I'd like to continue this discussion further in Wikiproofs:Syntax_Discussions, and have no problems with anyone refactoring the syntax discussion thread on this page over there. -- Raph 22:38, 28 March 2010 (UTC)
A good example of the main "problem" (not that Metamath isn't perfect :-) I am first discussing is seen in the ramifications of defining the ternary "and" syntax axiom w3a (vs. the binary "and" syntax axiom "wa"). And, because Metamath requires the concrete syntax to be identical to the abstract syntax, extra parentheses introduced by the user for readability are prohibited. Also (then I would switch to your new page), compare the Metamath "operator" syntax to, say, Python's (e.g. SageMath) -- I don't expect an individual to master multiple syntaxes, but different syntaxes might/ought to be available, depending on the user's preference. Finally...there are 4 unary boolean operators, 16 binary boolean operators, and yada-yada ternary boolean operators -- their "names" are essentially arbitrary, but their identities are established across the world of math/computer science via their truth tables, so translations between different systems/languages should be programmatic (i.e. I shouldn't care that Norm refers to "wi" while you refer to "->" while Grafzahl refers to Unicode something-or-other, just so long as I have a translation table and syntax converter code.) Mellocat 00:03, 29 March 2010 (UTC)
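The truth-table idea is easy to demonstrate in a few lines; the operator names and tables below are illustrative stand-ins, not actual Metamath or JHilbert data.

```python
# Sketch: a binary boolean operator's identity is its truth table, so
# names from different systems can be matched programmatically by
# computing the table rather than comparing names.
from itertools import product

def truth_table(f):
    """Tuple of outputs over all 4 input pairs; the operator's identity."""
    return tuple(f(p, q) for p, q in product((False, True), repeat=2))

# Two systems naming the same operator (implication) differently:
metamath_names = {'wi': lambda p, q: (not p) or q}
jhilbert_names = {'->': lambda p, q: (not p) or q}

# Build a translation table keyed by truth table:
by_table = {truth_table(f): name for name, f in metamath_names.items()}
translation = {name: by_table[truth_table(f)]
               for name, f in jhilbert_names.items()}
print(translation)  # {'->': 'wi'}
```

With all 16 binary tables enumerated, the same lookup would translate any complete pair of naming conventions.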
P.S. As an interesting side note about what can feasibly be done, about six months ago I upgraded my computer system to a dual Pentium R S5300 2.60GHz 64 bit processor/OS with 6GB RAM, and what is surely a faster hard drive. I also downloaded the latest JDK, 1.6.18, 64 bit version and recompiled mmj2 (though recompilation was not necessary). Initial startup including starting the JVM, verifying all Metamath proofs, parsing, and display of the mmj2 GUI screen took 5.554 seconds. With my old machine that process took about 16 seconds. And consider that my new machine, a desktop, cost only $400. Mellocat 14:38, 28 March 2010 (UTC)

pointer to more thoughts on definitions

I've posted a comment at Wikiproofs talk:JHilbert definition soundness regarding my (now somewhat better informed) thoughts on how we should redesign the definition mechanism. Followup discussion is probably better conducted there than here. Kingdon 15:49, 27 June 2010 (UTC)

New at www.wikiproofs.de

Hiya, I'm new here. I'm sorry if "/w/index.php?title=Wikiproofs:Tearoom" is not the right place for this post. My name is Rob, I am from Brazil.

Hi, welcome to the site. Was there any kind of contribution you were particularly interested in? There's plenty to do, from writing proofs to software development to writing explanatory text. Kingdon 00:13, 19 August 2010 (UTC)

Feedback on main page

Hello, a friend of mine took a look at wikiproofs and was kind enough to write up his first impressions. Here is what he said: The main thing I was looking for on my first visit was a list of available proofs, or at least some examples. Also a simple example ("hello world"). I see now that Help:Tutorial counts as "hello world" - maybe you need a more prominent link for that? Also, for some reason I skimmed right over the "featured article" linking to Principia Mathematica propositional logic.
I've taken a shot at reorganizing the home page so that these things are more prominent at User:Kingdon/Main page (diffs from current main page). Mostly this draft just moves things around, but I did trim out some verbiage and I'd also propose deleting Subject index (it is superseded by some new main page text and of course things like the categories). Reactions? Kingdon 02:39, 25 February 2011 (UTC)

Sorry for taking so long to answer. I've been struggling with a bad cold since Tuesday. Thank you for your efforts with the main page. I like the new design of the blue box, and also that the featured article stands out better. I'm also OK with deleting the subject index page for now, though at some time, we might need something better than the MSC category. We should do something about the redlinks in the pink box. Maybe just remove them for now. When I'm well again, I'd like to restart the definition discussion we had with Carl and Raph about a year ago. It seems to have stopped just before interesting examples for the two proposed safe definition schemes were posted. Once I can implement something in my github repo which satisfies Carl and Raph, we will have come one step (of many) closer to a JHilbert specification. Anyway, just go ahead boldly. If there are still little things to fix, I can do so later.--GrafZahl (talk) 22:13, 27 February 2011 (UTC)

Hope you are feeling better soon and no worries about the delay. I've gone ahead and implemented those changes, and also deleted the redlinks in the pink box. As for the subject index, I agree that we may need more navigation at some point, but it will be easy enough to recreate when someone gets to it (and it might take another form, or forms, like a navbox, or even some fancy software which ties into the import/export and can do things like "this theorem/module relies on these axioms/interfaces" across modules). Likewise for the redlinks - easy enough to recreate if/when anyone gets to that point.
As for the definitions, glad to see you have gotten to the point of being ready to tackle them. I assume you've at least glanced at the ghilbert implementation (which as far as I know is at [4]). Kingdon 02:43, 28 February 2011 (UTC)

How to create private sandbox?

I would like to learn writing proofs for JHilbert. I know about the official Sandbox. Can I create a private (not publicly viewable) sandbox just for myself that I can reuse? Slawekk 08:45, 11 April 2011 (UTC)

There's definitely a place where you can experiment freely and not collide with other people's experiments. Just create a page like User module:Slawekk/Sandbox (doesn't have to be called Sandbox, you can give it any name you'd like). This is publicly visible in the sense that someone who knows the name can load it, but searches don't return it (unless the person searching explicitly asks to search the User module namespace), so perhaps that is private enough (depending on why you were asking). Kingdon 17:10, 11 April 2011 (UTC)

Automated translation from metamath

The more I look at metamath's set.mm, the more stuff I realize is there. Although I also have some stylistic issues with set.mm, and of course the metamath engine has some issues which led to the work on ghilbert/jhilbert, it would also be nice to be able to build on metamath. So my questions are: (1) is there a published tool to produce translations like prop.ghi in ghilbert?, and (2) do people have any objections (or opinions pro/con) to importing this stuff to wikiproofs? What I had in mind was creating interfaces for the sections in the metamath theorem list, for example Interface:Propositional calculus (metamath), Interface:ZF Set Theory - add the Axiom of Power Sets (metamath) and so on. I did not have in mind trying to pull over proofs, just interfaces. Kingdon 16:49, 19 May 2011 (UTC)

I don't know of such a tool specifically for JHilbert (I haven't written one).
If there is one for Ghilbert, you can probably base your work on that; the differences are currently not that big. I don't have objections against creating interfaces matching the metamath theorem list (Norm has released set.mm under CC0, so it should go well with our own licence).--GrafZahl (talk) 20:16, 23 May 2011 (UTC)

Thanks; I'll see if Raph can point me to his tool (which I suppose will need some changes) and then I'll see how much of a time sink this ends up being. My top priority right now is getting Interface:Trigonometry ready for tau day in about a month, so I'm going to make sure I have something up there (prepared manually) before I get too deeply into messing around with tools. Kingdon 10:52, 26 May 2011 (UTC)

$wgCapitalLinks

Can we change the setting of $wgCapitalLinks now while the wiki is small and it is easy? Right now we have the default behavior, in which p-adic number (for example) gets turned into P-adic number (and then, on wikipedia they have hacks like the lowercase template or DISPLAYTITLE to try to partially recover, see w:p-adic number). We're better off with just setting $wgCapitalLinks to false like they do on wiktionary: see [5]. Kingdon 10:58, 26 May 2011 (UTC)

Hi, I've disabled capital links for the relevant namespaces.--GrafZahl (talk) 22:18, 28 May 2011 (UTC)
Hmm, didn't realize it would be per-namespace. Anyway, looks good. If I link to lowercase pi (π) it really links to lowercase pi, not uppercase pi as wikipedia would. Kingdon 23:36, 30 May 2011 (UTC)

Math goes Pop on Tau day

We got a mention on the math blog Math Goes Pop. Personally, I'm enjoying the tau day buzz even if it is a tau day buzz more than a wikiproofs buzz (one step at a time!). My co-workers and I went out for pie at lunch and sat on round bar stools (choosing those over rectangular chairs). Kingdon 22:15, 28 June 2011 (UTC)

Second axiomatization of first-order logic

Hi, although I haven't been spending a lot of time on talk pages I have been doing wikiproofs stuff. Some of it is JHilbert work. But the last few months have mostly been spent on a second axiomatization of first-order logic, which is now live (and complete) at First-order logic in terms of substitution built on equality. My immediate motive was to play around with intuitionistic first-order logic, but there are any number of reasons to have an axiomatization which is more closely based on Margaris (and other conventional first-order logic axiomatizations). It was a bit of a surprise to me how concepts like "is not free in" or substitution have a relatively natural mapping to recursive definitions, which can be expressed in JHilbert. Right now, especially in the case of substitution, this axiomatization doesn't really take things all the way, but instead ties substitution into equality somewhat the way metamath does (and I probably adopted more substitution statements as axioms than I needed to). But as far as I can tell, one could come up with a theory of first-order logic without equality using the existing JHilbert verifier (the builder axioms, scattered throughout every axiom set which builds on first-order logic, would need modification, though, which was my main reason for adopting the halfway approach). It then might be possible to add some equality axioms to that theory to get Interface:First-order logic with quantifiability.
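To illustrate the "natural mapping to recursive definitions" point in ordinary code: here is a hedged Python sketch of "x is not free in t" over an illustrative tuple encoding of terms (none of this is JHilbert syntax).

```python
# Sketch: "x is not free in t" read as a recursion over term structure.
# Variables are strings; compound terms are ('operator', operand, ...);
# quantification is encoded as ('forall', bound_var, body).
def not_free_in(x, t):
    if isinstance(t, str):                 # a variable
        return t != x
    if t[0] == 'forall':                   # binder: x bound here, or recurse
        _, v, body = t
        return v == x or not_free_in(x, body)
    return all(not_free_in(x, sub) for sub in t[1:])

# x is bound by the quantifier, so it is not free in the whole formula:
assert not_free_in('x', ('forall', 'x', ('=', 'x', 'y')))
# y occurs free in the body, so it is free in the whole formula:
assert not not_free_in('y', ('forall', 'x', ('=', 'x', 'y')))
```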

Anyway, I thought I should highlight this development and see if there are any reactions. I suppose I should also ask people like Norm Megill or Raph Levien what they think. Kingdon 06:54, 16 February 2012 (UTC)

Hi, just to let you know: I'm still watching the changes to the wiki, and, once again, thank you for your contributions. As for your question, I have to admit that, without actually diving into the matter, I'm out of my depth here. I hope someone of the others can help you.--GrafZahl (talk) 20:27, 23 February 2012 (UTC)
Thanks for the response. I found that this is (at least roughly) one of the unsolved problems on the metamath site [17], so I'm hoping I can interest Mr. Megill once I put a few finishing touches on it. Kingdon 05:32, 27 February 2012 (UTC)

ghilbert

I've begun to play around with writing a tool to convert the proofs now on wikiproofs to ghilbert (see [6]). Right now this is just an hour of work (or something) and will require significant work to convert any real proofs, but I thought I should at least mention its existence before someone stumbles across it on github and wonders what I'm up to. At the moment this is all pretty experimental, but in rough terms the idea is to hook up the body of proofs in wikiproofs (which are larger than anything which has been written for ghilbert so far, and also have some appealing properties like theorems which are named less cryptically than set.mm) with the ghilbert proof engine (which most notably has a better definition mechanism than jhilbert) and the somewhat greater community of people using ghilbert. Like I say, I'm not trying to make any concrete proposals about what (if anything) might come out of any of this, but I did want to start experimenting (in public, since doing it in private is kind of silly) and that seems to call for telling people about it. Kingdon (talk) 07:32, 21 October 2012 (UTC)

An early draft of this is now up. See [7] for the conversion of a wiki page (this part is still very rough, issues include doing something about the templates (do you plan to add templates to the ghilbert wiki?), dealing with footnotes (do you plan to add a footnote feature similar to the mediawiki one?), linking to the proofs (where they had been inline on wikiproofs), fixes to formatting (the converter isn't smart enough about things like asterisks, apparently), and probably more). A bit farther along are the conversions of the proofs. There's an interface at [8] which passes the ghilbert verifier, and a proof module at [9] which needs some work on the biconditional (kind of a special case, as ghilbert's defthm mechanism is sort of tied to biconditionals and equalities, if I understand it right), and definitions in general (things like ⊤ and ⊥ should be a fairly easy automated conversion to defthm, although it isn't done yet). But it doesn't seem like a huge amount of work to get this verifying, and presumably doing other modules will get easier (with the definitions being the big hangup; they will often require manual work). The tool which did the conversion is jh2gh. Kingdon (talk) 05:42, 25 November 2012 (UTC)
Progress to date is at [10]. The wiki formatting has improved somewhat and more stuff is passing the ghilbert verifier. Discussion has mostly been on the ghilbert mailing list but I'm also happy to discuss any aspect of this here too. Kingdon (talk) 04:27, 29 December 2012 (UTC)

Is anyone still here?

All the recent activity I see appears to be spambots. --Dfeuer (talk) 19:23, 24 April 2013 (UTC)

Yep, I'm still here. Wikiproofs is not really active at the moment, though.--GrafZahl (talk) 16:58, 29 April 2013 (UTC)
Is there a reason why it is not active?

For instance, is there another site where people are actively working to finish all of the major proof formalizations? Metamath is not nearly so easy to access, nor as elegant as a wiki. I would like to contribute towards something, but not if nobody else is going to participate.--Mattgately (talk) 02:54, 18 April 2014 (CEST)

Yes, there is one other project, Ghilbert [11].
About the activity on this wiki, well
• Not very many people appear to be interested in formalisation in general. Maybe a few dozen in the world.
• To protect against spambots, the hurdle of getting an account in this wiki is quite high. I wish it could be otherwise, but those spambots were really killing me, creating thousands of spam pages within a few weeks.
• Real life has got a hold on me, so I have very little time left for this project. I had rather hoped that with enough people around, this would be a self-supporting thing, but it never gained enough momentum.
I have a few projects in the pipeline to make Wikiproofs a worthwhile experience (better parser, better display, proof editor) and hopefully attract more people, but I'm not getting anything done right now. Sorry about that.--GrafZahl (talk) 03:12, 18 April 2014 (CEST)