I don't think these two things are mutually exclusive.
As far as I'm aware, there is work underway to integrate logical constructions with probabilistic machine learning, for example to force zero probability on impossible input cases. That amounts to encoding domain knowledge directly into the model, in the form of symbolic reasoning.
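To make that concrete, here's a minimal sketch (my own toy illustration, not from any particular paper) of the "force zero probabilities" idea: take a learned distribution over states, zero out the states a propositional constraint rules out, and renormalize over what remains.

```python
def constrain(probs, is_valid):
    """Zero out the probability of states violating the constraint,
    then renormalize over the remaining (valid) states."""
    masked = {s: (p if is_valid(s) else 0.0) for s, p in probs.items()}
    total = sum(masked.values())
    if total == 0:
        raise ValueError("constraint rules out all states with support")
    return {s: p / total for s, p in masked.items()}

# Toy model output over two binary variables (A, B).
probs = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

# Domain constraint: A implies B, so the state (A=1, B=0) is impossible.
constrained = constrain(probs, lambda s: not (s[0] == 1 and s[1] == 0))
```

The impossible state gets exactly zero probability, and the mass it held is redistributed proportionally over the valid states.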
I mean, even Bayesian nets require some encoding of causality, right? Maybe I'm reading too much "symbolic reasoning is worthless" into your comment?
> We propose the Probabilistic Sentential Decision Diagram (PSDD): A complete and canonical representation of probability distributions defined over the models of a given propositional theory. Each parameter of a PSDD can be viewed as the (conditional) probability of making a decision in a corresponding Sentential Decision Diagram (SDD). The SDD itself is a recently proposed complete and canonical representation of propositional theories. We explore a number of interesting properties of PSDDs, including the independencies that underlie them. We show that the PSDD is a tractable representation. We further show how the parameters of a PSDD can be efficiently estimated, in closed form, from complete data. We empirically evaluate the quality of PSDDs learned from data, when we have knowledge, a priori, of the domain logical constraints.
Still working on my understanding, but Professor Darwiche gave a lecture on the material in one of my classes. The salient bit:
> The problem we tackle here is that of developing a representation of probability distributions in the presence of massive, logical constraints. That is, given a propositional logic theory which represents domain constraints, our goal is to develop a representation that induces a unique probability distribution over the models of the given theory.
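As a toy version of what that quote is after (my own sketch, with an invented theory, not the PSDD construction itself): enumerate the models of a small propositional theory by brute force and put a distribution over exactly those models, with everything outside the theory getting zero mass. Here the distribution is just uniform; a PSDD would instead parameterize it tractably.

```python
from itertools import product

def models(variables, theory):
    """Yield all truth assignments (as dicts) satisfying the theory,
    where the theory is a predicate over an assignment dict."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if theory(assignment):
            yield assignment

# Invented example theory: (A or B) and (A implies C).
theory = lambda m: (m["A"] or m["B"]) and (not m["A"] or m["C"])
ms = list(models(["A", "B", "C"], theory))

# Induce a (uniform) distribution over the models; non-models get zero.
uniform = 1.0 / len(ms)
```

Brute-force enumeration is exponential in the number of variables, which is exactly why the SDD/PSDD machinery matters: it represents the theory and the distribution compactly instead of listing the models.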