Published: 10th August 2022
DOI: 10.4204/EPTCS.366
ISSN: 2075-2180

EPTCS 366

Proceedings
End-to-End Compositional Models of Vector-Based Semantics
NUI Galway, 15-16 August 2022

Edited by: Michael Moortgat and Gijs Wijnholds

Preface
Michael Moortgat and Gijs Wijnholds
Invited Presentation: A Tale of Four Disciplines for All Ages and All Languages
Bob Coecke, p. 1
Invited Presentation: Perspectives on Neural Proof Nets
Richard Moot, p. 5
Invited Presentation: Ambiguous Definite Pronoun Resolution via Lambek Calculus with Soft Sub-Exponentials and Machine Learning
Mehrnoosh Sadrzadeh, p. 8
Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax
Jean-Philippe Bernardy and Shalom Lappin, p. 9
A Model of Anaphoric Ambiguities using Sheaf Theoretic Quantum-like Contextuality and BERT
Kin Ian Lo, Mehrnoosh Sadrzadeh and Shane Mansfield, p. 23
Word-Embeddings Distinguish Denominal and Root-Derived Verbs in Semitic
Ido Benbaji, Omri Doron and Adèle Hénot-Mortier, p. 35
Language-independence of DisCoCirc's Text Circuits: English and Urdu
Muhammad Hamza Waseem, Jonathon Liu, Vincent Wang-Maścianica and Bob Coecke, p. 50

Preface

The workshop End-to-End Compositional Models of Vector-Based Semantics was held at NUI Galway on 15 and 16 August 2022 as part of the 33rd European Summer School in Logic, Language and Information (ESSLLI 2022).

The workshop was sponsored by the research project `A composition calculus for vector-based semantic modelling with a localization for Dutch' (Dutch Research Council NWO 360-89-070, 2017-2022). The objective of the project was to provide a collection of computational tools and resources for the compositional distributional study of Dutch.

The workshop program was made up of two parts: the first part reported on the results of the aforementioned project, while the second part consisted of contributed papers on related approaches. The present volume collects the contributed papers and the abstracts of the invited talks. The website https://compositioncalculus.sites.uu.nl provides more information on publications, tools and resources resulting from the sponsoring project.

We thank the members of the Programme Committee for their efforts, and the Dutch Research Council NWO for its financial support.

Program Committee

Michael Moortgat and Gijs Wijnholds
Utrecht, July 2022


A Tale of Four Disciplines for All Ages and All Languages

Bob Coecke (Quantinuum)

Our tale starts with John von Neumann's rejection of his own quantum mechanical formalism (Redei, 1996), goes via Erwin Schrödinger's conviction that composition is at the heart of quantum theory (Schrödinger, 1935) and Roger Penrose's diagrammatic substitute for tensor notation (Penrose, 1971), and makes a first pit-stop at categorical quantum mechanics (CQM) (Coecke, 2021). First presented as a high-level language for quantum computing (Abramsky and Coecke, 2004), CQM has since evolved into an educational tool (Coecke and Kissinger, 2017), first at university level, and now also at high-school level (Coecke and Gogioso, 2022), or maybe even earlier; experiments are underway to verify this.

Going back in time to 2005: when Jim Lambek attended our presentation of quantum teleportation in diagrammatic terms, he yelled: "Bob, this is grammar!"

Hence, when Steve Clark and Steve Pulman put forward the challenge of combining meaning and grammar in such a way that all sentence meanings live in the same space (Clark and Pulman, 2007), we knew that CQM provided the answer, and did so in a canonical manner (Clark et al., 2008; Coecke et al., 2010; Clark et al., 2014). The resulting framework has come to be known as DisCoCat. While most presentations use pregroup grammar (Lambek, 1999; Lambek, 2008), the framework works for any kind of type grammar (Coecke et al., 2013; Yeung and Kartsaklis, 2021).
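To illustrate the idea, here is a toy numerical sketch (dimensions, values and the random verb tensor are invented for the example, not taken from the cited papers): the grammatical reduction dictated by the pregroup types contracts the word tensors, leaving a vector in the shared sentence space.

    import numpy as np

    # Toy noun space N (2-d) and sentence space S (3-d); values are made up.
    alice = np.array([1.0, 0.0])
    bob = np.array([0.0, 1.0])

    # A transitive verb of pregroup type n^r s n^l lives in N (x) S (x) N.
    rng = np.random.default_rng(0)
    loves = rng.random((2, 3, 2))    # axes: subject, sentence, object

    # The pregroup reduction contracts the noun wires with the verb tensor.
    sentence = np.einsum("i,isj,j->s", alice, loves, bob)
    print(sentence)                  # a vector in the sentence space S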

Moreover, DisCoCat does not only capture linguistic structure: it also uses the spiders of CQM to provide `logical' accounts of words like relative pronouns (Sadrzadeh et al., 2013; Sadrzadeh et al., 2016; Kartsaklis, 2016; Coecke et al., 2018; Coecke, 2021).

But of course, a quantum model's natural habitat is a quantum computer (Feynman, 1982), and after a series of theoretical explorations (Zeng and Coecke, 2016; Meichanetzidis et al., 2020; Coecke et al., 2020), we performed quantum natural language processing on quantum hardware (Meichanetzidis et al., 2020; Lorenz et al., 2021). Now you can do so too, using our open-source Python library lambeq (Kartsaklis et al., 2021). What lambeq does is turn any sentence into a circuit that can then be compiled to any kind of quantum hardware, e.g. via Quantinuum's tket (Sivarajah et al., 2020).
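A minimal sketch of this pipeline (class names as in lambeq releases around version 0.2; the exact API may differ between versions):

    from lambeq import AtomicType, BobcatParser, IQPAnsatz

    # Parse a sentence into a string diagram (downloads a pretrained
    # parser model on first use).
    parser = BobcatParser()
    diagram = parser.sentence2diagram("Alice loves Bob")

    # Map the diagram to a parameterised quantum circuit, with one qubit
    # per noun wire and one per sentence wire.
    ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1},
                       n_layers=1)
    circuit = ansatz(diagram)

    # Export to tket for compilation onto quantum hardware.
    tk_circuit = circuit.to_tk()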

One limitation of DisCoCat is that, sentence meanings being vectors, it does not allow sentence meanings to be composed into text. Circuits, on the other hand, compose easily. Prior to our quantum implementations, we had already proposed DisCoCirc as an improvement on DisCoCat (Coecke, 2021). By taking nouns as inputs rather than as vectors, and substituting sentence types by noun types, we obtain composable circuits:

[Figure omitted: a text circuit in which the nouns are input wires and the sentences act as processes on those wires.]

Now, noun meanings aren't fixed, but evolve as the text progresses (Coecke and Meichanetzidis, 2020; De las Cuevas et al., 2020), and they also become entangled as the corresponding agents interact, as in the picture above.

When we attempted to realise DisCoCirc for all of English (Wang-Maścianica et al., 2022), we were in for a surprise: we discovered that in the passage to circuits, word-order differences between languages vanished (Waseem et al., 2022), and so did stylistic differences:

[Figure omitted: the same text circuit arising from different language renderings of one text.]

These circuits are moreover generative: any circuit that one can put together always corresponds to some text, although of course that text won't be unique.

One may therefore hope that DisCoCirc brings us closer to what meaning actually is, independent of the bureaucracy that comes into play when we force language to live on a line, as we humans do. For example, it is much closer to spatial reasoning (Wang-Maścianica and Coecke, 2021).

References

S. Abramsky and B. Coecke. A categorical semantics of quantum protocols. In Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science (LICS), pages 415-425, 2004. arXiv:quant-ph/0402130. doi:10.48550/arXiv.quant-ph/0402130

S. Clark, B. Coecke, E. Grefenstette, S. Pulman, and M. Sadrzadeh. A quantum teleportation inspired algorithm produces sentence meaning from word meaning and grammatical structure. Malaysian Journal of Mathematical Sciences, 8:15-25, 2014. arXiv:1305.0556. doi:10.48550/arXiv.1305.0556

S. Clark, B. Coecke, and M. Sadrzadeh. A compositional distributional model of meaning. In Proceedings of the Second Quantum Interaction Symposium (QI-2008), pages 133-140, 2008. http://www.cs.ox.ac.uk/people/stephen.clark/papers/qai08.pdf

S. Clark and S. Pulman. Combining symbolic and distributional models of meaning. In Proceedings of AAAI Spring Symposium on Quantum Interaction. AAAI Press, 2007. https://www.aaai.org/Papers/Symposia/Spring/2007/SS-07-08/SS07-08-008.pdf

B. Coecke. Kindergarten quantum mechanics. In A. Khrennikov, editor, Quantum Theory: Reconsiderations of the Foundations III, pages 81-98. AIP Press, 2005. arXiv:quant-ph/0510032. doi:10.48550/arXiv.quant-ph/0510032

B. Coecke. Quantum picturalism. Contemporary Physics, 51:59-83, 2009. arXiv:0908.1787. doi:10.48550/arXiv.0908.1787

B. Coecke. The Mathematics of Text Structure, pages 181-217. Springer International Publishing, 2021. arXiv:1904.03478. doi:10.48550/arXiv.1904.03478

B. Coecke, G. de Felice, K. Meichanetzidis, and A. Toumi. Foundations for near-term quantum natural language processing, 2020. arXiv:2012.03755. doi:10.48550/arXiv.2012.03755

B. Coecke and S. Gogioso. Quantum in Pictures. Quantinuum, September 2022.

B. Coecke, E. Grefenstette, and M. Sadrzadeh. Lambek vs. Lambek: Functorial vector space semantics and string diagrams for Lambek calculus. Annals of Pure and Applied Logic, 164:1079-1100, 2013. arXiv:1302.0393. doi:10.1016/j.apal.2013.05.009

B. Coecke, D. Horsman, A. Kissinger, and Q. Wang. Kindergarden quantum mechanics graduates... or how I learned to stop gluing LEGO together and love the ZX-calculus. Theoretical Computer Science, 897:1-22, 2022. arXiv:2102.10984. doi:10.48550/arXiv.2102.10984

B. Coecke and A. Kissinger. Picturing Quantum Processes. A First Course in Quantum Theory and Diagrammatic Reasoning. Cambridge University Press, 2017. doi:10.1017/9781316219317

B. Coecke, M. Lewis, and D. Marsden. Internal wiring of cartesian verbs and prepositions. In M. Lewis, B. Coecke, J. Hedges, D. Kartsaklis, and D. Marsden, editors, Procs. of the 2018 Workshop on Compositional Approaches in Physics, NLP, and Social Sciences, volume 283 of Electronic Proceedings in Theoretical Computer Science, pages 75-88, 2018. doi:10.4204/EPTCS.283.6

B. Coecke and K. Meichanetzidis. Meaning updating of density matrices. Journal of Applied Logics, 2631(5):745, 2020. doi:10.48550/arXiv.2001.00862

B. Coecke, M. Sadrzadeh, and S. Clark. Mathematical foundations for a compositional distributional model of meaning. In J. van Benthem, M. Moortgat, and W. Buszkowski, editors, A Festschrift for Jim Lambek, volume 36 of Linguistic Analysis, pages 345-384. 2010. arXiv:1003.4394. doi:10.48550/arXiv.1003.4394

G. De las Cuevas, A. Klingler, M. Lewis, and T. Netzer. Cats climb entails mammals move: preserving hyponymy in compositional distributional semantics. arXiv:2005.14134 [quant-ph], 2020. doi:10.48550/arXiv.2005.14134

R. P. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21:467-488, 1982.

D. Kartsaklis. Coordination in categorical compositional distributional semantics. In Semantic Spaces at the Intersection of NLP, Physics and Cognitive Science, 2016. arXiv:1606.01515. doi:10.4204/EPTCS.221.4

D. Kartsaklis, I. Fan, R. Yeung, A. Pearson, R. Lorenz, A. Toumi, G. de Felice, K. Meichanetzidis, S. Clark, and B. Coecke. lambeq: An efficient high-level Python library for quantum NLP. arXiv preprint arXiv:2110.04236, 2021. doi:10.48550/arXiv.2110.04236

J. Lambek. Type grammar revisited. Logical Aspects of Computational Linguistics, 1582, 1999. doi:10.1007/3-540-48975-4_1

J. Lambek. From word to sentence. Polimetrica, Milan, 2008. https://www.math.mcgill.ca/barr/lambek/pdffiles/2008lambek.pdf

R. Lorenz, A. Pearson, K. Meichanetzidis, D. Kartsaklis, and B. Coecke. QNLP in practice: Running compositional models of meaning on a quantum computer. arXiv preprint arXiv:2102.12846, 2021. doi:10.48550/arXiv.2102.12846

K. Meichanetzidis, S. Gogioso, G. de Felice, A. Toumi, N. Chiappori, and B. Coecke. Quantum natural language processing on near-term quantum computers. Accepted for QPL 2020, 2020. arXiv preprint arXiv:2005.04147. doi:10.4204/EPTCS.340.11

K. Meichanetzidis, A. Toumi, G. de Felice, and B. Coecke. Grammar-aware question-answering on quantum computers. arXiv preprint arXiv:2012.03756, 2020. doi:10.48550/arXiv.2012.03756

R. Penrose. Applications of negative dimensional tensors. In Combinatorial Mathematics and its Applications, pages 221-244. Academic Press, 1971. http://homepages.math.uic.edu/~kauffman/Penrose.pdf

M. Redei. Why John von Neumann did not like the Hilbert space formalism of quantum mechanics (and what he liked instead). Studies in History and Philosophy of Modern Physics, 27(4):493-510, 1996. citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.763&rep=rep1&type=pdf

M. Sadrzadeh, S. Clark, and B. Coecke. The Frobenius anatomy of word meanings I: subject and object relative pronouns. Journal of Logic and Computation, 23:1293-1317, 2013. arXiv:1404.5278. doi:10.1093/logcom/ext044

M. Sadrzadeh, S. Clark, and B. Coecke. The Frobenius anatomy of word meanings II: possessive relative pronouns. Journal of Logic and Computation, 26:785-815, 2016. arXiv:1406.4690. doi:10.1093/logcom/exu027

E. Schrödinger. Discussion of probability relations between separated systems. Cambridge Philosophical Society, 31:555-563, 1935. doi:10.1017/S0305004100013554

S. Sivarajah, S. Dilkes, A. Cowtan, W. Simmons, A. Edgington, and R. Duncan. t|ket>: A retargetable compiler for NISQ devices. arXiv preprint arXiv:2003.10611, 2020. doi:10.48550/arXiv.2003.10611

V. Wang-Maścianica and B. Coecke. Talking space: Inference from spatial linguistic meanings. Journal of Cognitive Science, 22(3):421-463, 2021. doi:10.48550/arXiv.2109.06554

V. Wang-Maścianica, J. Liu, and B. Coecke. Distilling text into circuits. arXiv:soon.

M. H. Waseem, J. Liu, V. Wang-Maścianica, and B. Coecke. Language-independence of DisCoCirc's text circuits: English and Urdu. This volume.

R. Yeung and D. Kartsaklis. A CCG-based version of the DisCoCat framework. arXiv:2105.07720 [cs, math], 2021. doi:10.48550/arXiv.2105.07720

W. Zeng and B. Coecke. Quantum algorithms for compositional natural language processing. Electronic Proceedings in Theoretical Computer Science, 221, 2016. arXiv:1608.01406. doi:10.4204/EPTCS.221.8


Perspectives on Neural Proof Nets

Richard Moot (CNRS/LIRMM Montpellier)

Introduction

Proof nets are a way of representing proofs as a type of (hyper)graph. They were originally introduced for linear logic (Girard 1987). Proof nets can be seen as a parallelised sequent calculus which removes inessential rule permutations, but also as a multi-conclusion natural deduction which simplifies many of the logical rules (notably several of the elimination rules). This makes proof nets a good choice for automated theorem proving: avoiding needless rule permutations entails an important reduction of the proof search space (compared to sequent calculus, and to a somewhat lesser extent compared to natural deduction), while still allowing us to compute the lambda terms corresponding to our proofs: enumerating all proof nets for a given sequent is equivalent to enumerating all of its different lambda terms.

Proof nets can be adapted to different types of type-logical grammars while preserving their good logical properties (Moot 2021). This makes them an important tool for testing the predictions of different grammars written in type-logical formalisms. Combining the lambda terms produced by proof search with lambda terms assigned to words in the lexicon places type-logical grammars in the formal semantics tradition initiated by Montague (Montague 1974; Moortgat 1997).

The `standard' approach to using proof nets for natural language processing is the following.

  1. Lexical lookup: The lexicon maps words to formulas. In general, a word can have multiple formulas assigned to it, so this step is fundamentally non-deterministic.

  2. Unfold: We write down the formula decomposition tree.

  3. Link: We enumerate the 1-1 matchings between atomic formulas of opposite polarity, similar to the way a resolution proof links atoms with their negations.

  4. Check correctness: We check whether the resulting structure satisfies the correctness conditions¹. (A toy sketch of steps 1-3 follows below.)
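As a toy illustration of steps 1-3 (the mini-lexicon, formula encoding and polarity convention are invented for the example; real provers like those described in Moot 2021 use far more refined data structures and prune with the correctness conditions of step 4):

    from itertools import permutations, product

    # Step 1: a toy ambiguous lexicon. Formulas are atoms ("np",) or
    # implications encoded as (A, "\\", B) and (B, "/", A).
    LEXICON = {
        "John": [("np",)],
        "slept": [(("np",), "\\", ("s",))],
    }

    def unfold(formula, pol, atoms):
        """Step 2: formula decomposition, collecting atomic leaves with
        polarities. The argument of an implication flips polarity."""
        if len(formula) == 1:              # atomic formula
            atoms.append((formula[0], pol))
            return
        left, op, right = formula
        if op == "\\":                     # A \ B
            unfold(left, -pol, atoms)
            unfold(right, pol, atoms)
        else:                              # B / A
            unfold(left, pol, atoms)
            unfold(right, -pol, atoms)

    def linkings(words, goal=("s",)):
        """Steps 1 and 3: choose a formula per word, then enumerate the
        1-1 matchings between atoms of opposite polarity."""
        for choice in product(*(LEXICON[w] for w in words)):
            atoms = []
            for formula in choice:
                unfold(formula, +1, atoms)   # hypotheses: positive
            unfold(goal, -1, atoms)          # goal formula: negative
            pos = [a for a, p in atoms if p > 0]
            neg = [a for a, p in atoms if p < 0]
            if len(pos) != len(neg):
                continue
            for perm in permutations(range(len(neg))):
                if all(pos[i] == neg[perm[i]] for i in range(len(pos))):
                    # Step 4 (graph-theoretic correctness) is omitted here.
                    yield choice, perm

    for choice, matching in linkings(["John", "slept"]):
        print(choice, matching)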

This approach is standard because it corresponds naturally to the way grammars are written in type-logical formalisms: the grammar writer provides a lexicon and then tests the grammar on sentences built from the words in the lexicon. The task of the theorem prover is then to find all different proofs/lambda terms, which allows type-logical grammarians to test the predictions of their grammars by computing the different meanings of the sentences in their fragment.

Neural proof nets

Proof enumeration for type-logical grammars works well in a research context, with relatively small lexicons and sentences. But how can we `scale' type-logical grammar parsing for use in natural language understanding tasks? The standard approach has two main bottlenecks. First, lexical ambiguity (that is, words can be assigned many different formulas) becomes a major problem. Second, the matching step is formally equivalent to finding a correct permutation (under some constraints). For both of these bottlenecks, we are no longer interested in enumerating the different possibilities, but in finding the correct solution, correct in the sense that it corresponds to the intended interpretation of the sentence (more prosaically, this means it should be the same as the proof of the sentence found in a given corpus).

Different machine learning approaches have been successfully applied to the first task under the label `supertagging' (Bangalore and Joshi 2011). As in many other research areas, deep learning has greatly improved accuracy on this task. For example, on the French TLGbank (Moot 2015), a maximum-entropy supertagger gets 88.8% of the assigned formulas correct, whereas the current neural state of the art reaches 95.9% (Kogkalidis and Moortgat 2022).

The second task, linking the atomic formulas, has only recently been implemented using a neural network (Kogkalidis, Moortgat, and Moot 2020b). This provides a complete neural network solution for parsing arbitrary text using type-logical grammars: neural proof nets. Neural proof nets have been successfully applied to the Dutch Æthel dataset (Kogkalidis, Moortgat, and Moot 2020a), and I will present new data on our large-scale experiments with the French TLGbank.

Different perspectives

While we have seen important improvements in supertagging, it remains an important bottleneck for neural proof nets: a single error in the formula assignment can cause a proof to fail. In addition, compared to standard parser evaluation metrics, it is hard to measure partial successes in neural proof nets: we can calculate the percentage of words assigned the correct formula, and, given the correct formulas, we can calculate the percentage of correct links, but it is not obvious how to calculate a score combining these two percentages into a single useful metric other than the extremely severe `percentage of sentences without any errors'.

I therefore propose a different perspective on neural proof nets: instead of dividing the task into a supertagging task and a linking task, we divide it into a graph generation task and a graph labelling task. The graph generation task is set up in such a way that it generates a proof net step by step, ensuring at each step that the structure is valid. The graph labelling task then assigns labels to the graph vertices, where the labels are the logical connectives and atomic formulas.

The advantage of this setup is that it ensures by construction that we obtain a proof net (and therefore a lambda term) for our input sentence, and that errors in the labelling phase have a relatively minor impact on downstream tasks. We can also view the inductive proof net construction steps as parser actions and explore multiple solutions (for example, using beam search). This makes it easy for a model to predict the k-best proof nets for a sentence.

We can set up this alternative vision of neural proof nets in such a way that each proof net is generated by a unique set of parser actions. This also provides a way to solve the proof comparison problem for neural proof nets: if each proof net can be uniquely described by the set of actions used to create it, then we can compare two such action sets and apply standard statistical measures (such as f-scores) to them.
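For instance (a schematic sketch; the action tuples below are an invented encoding, purely for illustration), comparing a predicted action set against a gold one reduces to a standard F1 computation:

    def f_score(gold: set, predicted: set) -> float:
        """F1 between the action sets describing two proof nets."""
        tp = len(gold & predicted)           # actions present in both
        if tp == 0:
            return 0.0
        precision = tp / len(predicted)
        recall = tp / len(gold)
        return 2 * precision * recall / (precision + recall)

    gold = {("axiom", 0, 5), ("axiom", 1, 4), ("par", 2)}
    pred = {("axiom", 0, 5), ("axiom", 1, 2), ("par", 2)}
    print(f"F1 = {f_score(gold, pred):.2f}")   # F1 = 0.67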


  1. These correctness conditions take different forms, but are generally expressed as properties of the graph structure. Even though checking correctness is listed as a final step here, actual theorem provers efficiently prune branches of the search space that would fail the correctness conditions.

References

Bangalore, Srinivas, and Aravind Joshi. 2011. Supertagging: Using Complex Lexical Descriptions in Natural Language Processing. MIT Press.

Girard, Jean-Yves. 1987. Linear Logic. Theoretical Computer Science 50: 1-102. doi:10.1016/0304-3975(87)90045-4.

Kogkalidis, Konstantinos, and Michael Moortgat. 2022. Geometry-Aware Supertagging with Heterogeneous Dynamic Convolutions. arXiv Preprint arXiv:2203.12235. doi:10.48550/arXiv.2203.12235.

Kogkalidis, Konstantinos, Michael Moortgat, and Richard Moot. 2020a. ÆTHEL: Automatically Extracted Typelogical Derivations for Dutch. In Proceedings of the 12th Language Resources and Evaluation Conference, 5257-66. https://aclanthology.org/2020.lrec-1.647/.

---. 2020b. Neural Proof Nets. In Proceedings of the 24th Conference on Computational Natural Language Learning, 26-40. doi:10.18653/v1/2020.conll-1.3

Montague, Richard. 1974. The Proper Treatment of Quantification in Ordinary English. In Formal Philosophy. Selected Papers of Richard Montague, edited by R. Thomason. New Haven: Yale University Press. doi:10.1007/978-94-010-2506-5_10

Moortgat, Michael. 1997. Categorial Type Logics. In Handbook of Logic and Language, edited by Johan van Benthem and Alice ter Meulen, 93-177. Elsevier/MIT Press. doi:10.1016/B978-044481714-3/50005-9

Moot, Richard. 2015. A Type-Logical Treebank for French. Journal of Language Modelling 3 (1): 229-64. doi:10.15398/jlm.v3i1.92

---. 2021. Type-Logical Investigations: Proof-Theoretic, Computational and Linguistic Aspects of Modern Type-Logical Grammars. Habilitation à Diriger des Recherches, Université de Montpellier.


Ambiguous Definite Pronoun Resolution via Lambek Calculus with Soft Sub-Exponentials and Machine Learning

Mehrnoosh Sadrzadeh (UCL)

Joint work with Hadi Wazni, Kin Ian Lo and Lachlan McPheat (all UCL)

In 1958, Lambek proposed the logic of a residuated monoid to reason about the syntax of natural language. This logic was amongst the first substructural logics: it had neither contraction nor weakening, structural rules that are indeed unnecessary for fixed word order languages. Apart from the difficulty of generalising this logic to free word order languages, such as Sanskrit and Latin, it also failed to model discourse structure, i.e. structure that goes beyond sentential boundaries, for instance the anaphoric reference in "John slept. He snored.". Subsequent work of Moortgat, Jaeger, Morrill, and more recently Kanovich et al. showed that this can be remedied by endowing the calculus with certain types of modalities.

In this talk, I will show how endowing the Lambek calculus with soft sub-exponential modalities lets us model and reason about discourse relations such as coreference. I will present a truncated Fock space semantics for these modalities, in which projections from Fock spaces give us access to bounded instances of formulae. I will also go through a string diagrammatic calculus for this semantics.
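As a pointer to the underlying construction (a sketch in standard notation; the talk's own definitions may differ in detail), the truncated Fock space over a meaning space $V$ with bound $k$ is the direct sum of the tensor powers of $V$ up to order $k$:

\[
  \mathcal{F}_{\leq k}(V) \;=\; \bigoplus_{n=0}^{k} V^{\otimes n}
  \;=\; \mathbb{R} \,\oplus\, V \,\oplus\, (V \otimes V) \,\oplus\, \cdots \,\oplus\, V^{\otimes k},
\]

where the projections $p_n \colon \mathcal{F}_{\leq k}(V) \to V^{\otimes n}$ are what gives access to the bounded ($n \leq k$) instances of a formula interpreted in $V$.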

Whenever we have a vector space semantics, we have the advantage of being able to instantiate it on large corpora of data using machine learning, and of experimenting with the learnt representations on mainstream natural language tasks. Thanks to a translation from vector spaces to quantum circuits due to Coecke et al., we can now also learn these representations on quantum computing devices such as the IBMq range. I will show how a simple extension of this translation to Fock spaces enabled us to obtain quantum circuit semantics for discourse relations. I will then show how we learnt the parameters of these quantum circuits and experimented with them on an ambiguous definite pronoun resolution task. In the quantum case, models encoding both grammar and discourse structure achieved the highest training accuracy, compared to models that (1) only encoded grammatical structure, (2) only encoded discourse structure, or (3) encoded neither. In the classical case, the results were less conclusive, a point that needs further clarification.