Soprano  2.7.56
Public Slots | Public Member Functions
Soprano::Inference::InferenceModel Class Reference

The Soprano Inference Model provides a simple forward chaining inference engine which uses the underlying parent model itself to store status information.

#include <Soprano/Inference/InferenceModel>

Inheritance diagram for Soprano::Inference::InferenceModel: inherits Soprano::FilterModel.


Public Slots

void performInference ()
void clearInference ()
void setCompressedSourceStatements (bool b)
void setOptimizedQueriesEnabled (bool b)

Public Member Functions

 InferenceModel (Model *parent)
 ~InferenceModel ()
Error::ErrorCode addStatement (const Statement &)
Error::ErrorCode removeStatement (const Statement &)
Error::ErrorCode removeAllStatements (const Statement &)
void addRule (const Rule &)
void setRules (const QList< Rule > &rules)

Detailed Description

The Soprano Inference Model provides a simple forward chaining inference engine which uses the underlying parent model itself to store status information.

The InferenceModel performs perfect inference, which means that removal of statements is supported and results in a complete update of the inferred statements. There is only one exception: if a model contains two statements in different named graphs that share the same subject, predicate, and object, and both trigger a rule, then removing one of these statements also removes the inferred statements, even though the remaining statement would still make the inferred one valid. This situation is resolved once the same rule is triggered again by some other added statement, or once performInference() is called.

The inference is performed based on rules which are stored in Rule instances. Rules can be created manually or parsed using a RuleParser.

The inference engine works roughly as follows:

Whenever a new statement is added, it is compared against each rule to check whether it could trigger that rule. If it could, the rule is applied to the whole model.

If a rule produces a new inferred statement, the following data is created: the inferred statements are stored in a dedicated named graph, together with metadata recording the source statements that triggered the rule.

Thus, when removing a statement, it is easy to check whether it was used to infer another one by querying all named graphs that list it as a source statement.
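The bookkeeping described above can be illustrated with a small, self-contained sketch. This is not the Soprano implementation; it is a conceptual model (plain C++, statements reduced to strings, a single hard-coded rule) showing how recording source statements per inferred statement makes perfect removal possible:

```cpp
// Conceptual sketch of forward chaining with source tracking.
// NOT the Soprano implementation: statements are plain strings and
// the single rule "A infers B" stands in for real Rule instances.
#include <cassert>
#include <map>
#include <set>
#include <string>

using Statement = std::string; // stand-in for a full RDF quad

struct Engine {
    std::set<Statement> facts;
    // inferred statement -> the source statements that produced it
    std::map<Statement, std::set<Statement>> sources;

    // hard-coded stand-in rule: presence of "A" infers "B"
    void applyRules() {
        if (facts.count("A") && !facts.count("B")) {
            facts.insert("B");
            sources["B"] = {"A"};
        }
    }

    void addStatement(const Statement& s) {
        facts.insert(s);
        // a real engine only applies the rules the new statement can trigger
        applyRules();
    }

    void removeStatement(const Statement& s) {
        facts.erase(s);
        // drop every inferred statement that used s as a source
        for (auto it = sources.begin(); it != sources.end();) {
            if (it->second.count(s)) {
                facts.erase(it->first);
                it = sources.erase(it);
            } else {
                ++it;
            }
        }
    }
};
```

Adding "A" infers "B"; removing "A" removes "B" again, exactly because "A" was recorded as its source. The named-graph exception described above corresponds to two distinct sources producing the same inferred fact while only one of them is recorded.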

Author:
Sebastian Trueg trueg@kde.org

Definition at line 71 of file inferencemodel.h.


Constructor & Destructor Documentation

InferenceModel (Model *parent)

~InferenceModel ()


Member Function Documentation

Error::ErrorCode addStatement (const Statement &)

Add a new statement to the model. Inference is performed immediately; inferred statements are stored in additional named graphs.

Reimplemented from Soprano::FilterModel.

Error::ErrorCode removeStatement (const Statement &)

Remove one statement from the model.

Reimplemented from Soprano::FilterModel.

Error::ErrorCode removeAllStatements (const Statement &)

Remove statements from the model.

Reimplemented from Soprano::FilterModel.

void addRule (const Rule &)

Add an inference rule to the set of rules. This method will not trigger any inference action. If inference is necessary, call performInference() after adding the new rules.

void setRules (const QList< Rule > &rules)

Set the inference rules to be used. This method will not trigger any inference action. If inference is necessary, call performInference() after setting the new rules.

void performInference () [slot]

Normally, inference is performed whenever new statements are added to the model or statements are removed. This method performs inference on the whole model instead. It is useful for initializing a model that already contains statements, or for updating the model if it has been modified while bypassing this filter model.

The latter can easily be done by connecting the Model::statementsAdded and Model::statementsRemoved signals to this slot.

void clearInference () [slot]

Removes all statements inferred by this model. This can be useful if the parent model has been changed without informing the inference model and statements have been removed.

void setCompressedSourceStatements (bool b) [slot]

If compressed statements are enabled, source statements are stored compressed in a single literal value. Otherwise, source statements are stored using rdf:subject, rdf:predicate, rdf:object, and sil:context. Non-compressed statements are much cleaner from an ontology-design point of view, while compressed statements take much less space.

By default, compressed source statements are enabled.

This method exists mainly for historical reasons and there is normally no need to call it. Compressed statements should work well for most users.

Parameters:
b	If true, compressed source statements are enabled (the default).
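The difference between the two encodings can be sketched in plain C++. This is an illustration of the trade-off only, not Soprano's actual storage format; the "|" separator and the string triple representation are assumptions made for the example:

```cpp
// Sketch of the two source-statement encodings described above.
// NOT Soprano's actual storage format; the "|" separator and the
// string-based triples are assumptions made for illustration.
#include <cassert>
#include <string>
#include <vector>

struct Stmt { std::string s, p, o, c; };

// Compressed: the whole source statement packed into one literal value.
// One triple per source statement; compact, but opaque to ontology tools.
std::string compressedLiteral(const Stmt& st) {
    return st.s + "|" + st.p + "|" + st.o + "|" + st.c;
}

// Non-compressed: one triple per component, attached to a node for the
// source statement. Cleaner ontologically, but four triples instead of one.
std::vector<std::string> expandedTriples(const std::string& node, const Stmt& st) {
    return {
        node + " rdf:subject " + st.s,
        node + " rdf:predicate " + st.p,
        node + " rdf:object " + st.o,
        node + " sil:context " + st.c,
    };
}
```

The compressed form stores one value where the expanded form stores four triples, which is why the compressed encoding is the space-saving default.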

void setOptimizedQueriesEnabled (bool b) [slot]

If the storage backend supports joined SPARQL queries via UNION, it makes sense to enable this.

Parameters:
b	If true, InferenceModel will use optimized queries for the inference performed during addStatement. This speeds up the process considerably, as matching rules are only applied to the new statement. This flag has no influence on performInference(), though.

The default is to disable the optimized queries, since the default Soprano Redland backend does not support UNION.


The documentation for this class was generated from the following file: inferencemodel.h