Faculty of Computer Science

Research Group Theoretical Computer Science


Oberseminar: Heterogene formale Methoden


Date: 2024, February 13
Time: 12:00 a.m.
Place: G29-018
Author: Mossakowski, Till
Title: Fuzzy (In)Consistency – Semantics for the Semantic Loss of Logical Neural Networks

Abstract:

Logical neural networks, as well as neural classifiers in general, do not use only the classical logical truth values false (0) and true (1), but the full spectrum of real values between 0 and 1. We give a brief overview of two possible interpretations: fuzzy logic of inexact concepts (modeling uncertainty, vagueness, ambiguity, and ambivalence), and probability (modeling frequentist or subjective likelihood). Since logical neural networks specify lower and upper bounds, this results in an open-world notion of fuzziness and in imprecise probability, respectively.

Logical neural networks employ a semantic loss function that measures the overall inconsistency of the current classification with some background theory. This gradual form of inconsistency is defined as the difference between lower and upper bounds, after bounds have been propagated through sub-formulas; this definition is proof-theoretic in nature. In this talk, we define a model-theoretic notion of gradual consistency and compare it with the proof-theoretic notion employed by logical neural networks. We also show that the usual duality (technically: a Galois connection) between theories and model classes still holds.
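As a rough illustration (a minimal sketch of the bound-interval idea, with names and simplifications of my own, not the construction presented in the talk), gradual inconsistency can be read off as the amount by which a propagated lower bound overshoots the corresponding upper bound:

```python
# Hypothetical sketch: truth values as intervals (lower, upper) in [0, 1],
# as used by logical neural networks.

def neg(b):
    """Negation flips and swaps the bounds."""
    lo, hi = b
    return (1.0 - hi, 1.0 - lo)

def intersect(b1, b2):
    """Combine two bound assertions about the same formula."""
    return (max(b1[0], b2[0]), min(b1[1], b2[1]))

def inconsistency(b):
    """Gradual inconsistency: how far the lower bound exceeds the upper."""
    return max(0.0, b[0] - b[1])

# Assert that A is at least 0.8 true, but also that not-A is at least 0.7 true.
a_direct = (0.8, 1.0)
a_from_neg = neg((0.7, 1.0))      # bounds on A propagated from not-A: about (0.0, 0.3)
a_combined = intersect(a_direct, a_from_neg)
loss = inconsistency(a_combined)  # about 0.5: lower bound 0.8 exceeds upper bound 0.3
```

With consistent assertions the combined interval stays non-empty and the loss is 0; the loss grows as the asserted bounds become more contradictory.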

