{
"cells": [
{
"cell_type": "markdown",
"id": "b1db8c9a",
"metadata": {},
"source": [
"# Sufficient Reasons"
]
},
{
"cell_type": "markdown",
"id": "514a5144",
"metadata": {},
"source": [
"Let $f$ be a Boolean function represented by a decision tree $T$, $x$ be an instance and $p$ be the prediction of $T$ on $x$ ($f(x) = p$), a **sufficient reason** for $x$ is a term of the binary representation of the instance that is a prime implicant of $f$ that covers $x$.\n",
"\n",
"In other words, a **sufficient reason** for an instance $x$ given a class described by a Boolean function $f$ is a subset $t$ of the characteristics of $x$ that is minimal w.r.t. set inclusion and such that any instance $x'$ sharing this set $t$ of characteristics is classified by $f$ as $x$ is."
]
},
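{
"cell_type": "markdown",
"id": "c3a91f20",
"metadata": {},
"source": [
"This definition can be illustrated without any solver. The following sketch (our own code, not part of the PyXAI API) enumerates the sufficient reasons of a small Boolean function by brute force: a term built from the instance's characteristics fixes the prediction when every completion of it yields the same prediction, and it is a sufficient reason when it is additionally subset-minimal. The function ```f``` below is the one computed by the hand-crafted tree used later in this page.\n",
"\n",
"```python\n",
"from itertools import combinations, product\n",
"\n",
"def f(x):\n",
"    # Boolean function of the hand-crafted tree: (x1 and x4) or (x2 and x3 and x4)\n",
"    return int((x[0] and x[3]) or (x[1] and x[2] and x[3]))\n",
"\n",
"def fixes_prediction(term, instance):\n",
"    # True when every completion of the free positions keeps the prediction\n",
"    free = [i for i in range(len(instance)) if i not in term]\n",
"    for bits in product((0, 1), repeat=len(free)):\n",
"        x = list(instance)\n",
"        for i, b in zip(free, bits):\n",
"            x[i] = b\n",
"        if f(tuple(x)) != f(instance):\n",
"            return False\n",
"    return True\n",
"\n",
"def sufficient_reasons(instance):\n",
"    n = len(instance)\n",
"    terms = [set(c) for k in range(n + 1) for c in combinations(range(n), k)\n",
"             if fixes_prediction(set(c), instance)]\n",
"    # keep only the subset-minimal terms\n",
"    return [t for t in terms if not any(s < t for s in terms)]\n",
"\n",
"print(sufficient_reasons((1, 1, 1, 1)))  # [{0, 3}, {1, 2, 3}]\n",
"```\n",
"\n",
"This enumeration is exponential in the number of binary variables, which is why PyXAI relies on SAT solvers instead."
]
},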
{
"cell_type": "markdown",
"id": "91c11ebe",
"metadata": {},
"source": [
"| <Explainer Object>.sufficient_reason(*, n=1, time_limit=None): | \n",
"| :----------- | \n",
"| This method creates a CNF formula associated with the decision tree and solves it to find sufficient reasons. To achieve it, several calls to a SAT solver ([Glucose](https://www.labri.fr/perso/lsimon/research/glucose/)) are performed and the result of each call is a sufficient reason. The method prevents finding the same sufficient reason twice or more by adding clauses (called blocking clauses) between each invocation.
Returns ```n``` sufficient reasons of the current instance in a ```Tuple``` (when ```n``` is set to 1, does not return a ```Tuple``` but just the reason). Supports the excluded features. The reasons are in the form of binary variables, you must use the ```to_features``` method if you want to obtain a representation based on the features represented at start. |\n",
"| n ```Integer``` ```Explainer.ALL```: The desired number of sufficient reasons. Sets this to ```explainer.ALL``` to request all reasons. Default value is 1.|\n",
"| time_limit ```Integer```: The time limit of the method in seconds. Sets this to ```None``` to give this process an infinite amount of time. Default value is ```None```.|\n"
]
},
{
"cell_type": "markdown",
"id": "b326e678",
"metadata": {},
"source": [
"A sufficient reason is minimal w.r.t. set inclusion, i.e. there is no subset of this reason which is also a sufficient reason. A **minimal sufficient reason** for $x$ is a sufficient reason for $x$ that\n",
"contains a minimal number of literals. In other words, a **minimal sufficient reason** has a minimal size. "
]
},
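{
"cell_type": "markdown",
"id": "d4b82e31",
"metadata": {},
"source": [
"The distinction matters because a subset-minimal sufficient reason is not necessarily of minimum size. As a small library-independent sketch (our own code; the values are the sufficient reasons of the instance $(1,1,1,1)$ in the hand-crafted example below), once the sufficient reasons are known, the minimal ones are simply those of smallest cardinality:\n",
"\n",
"```python\n",
"# sufficient reasons of the instance (1,1,1,1) in the hand-crafted example\n",
"reasons = [(1, 4), (2, 3, 4)]\n",
"smallest = min(len(r) for r in reasons)\n",
"minimal = [r for r in reasons if len(r) == smallest]\n",
"print(minimal)  # [(1, 4)]\n",
"```"
]
},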
{
"cell_type": "markdown",
"id": "0240672f",
"metadata": {},
"source": [
"| <ExplainerDT Object>.minimal_sufficient_reason(*, n=1, time_limit=None): | \n",
"| :----------- | \n",
"| This method considers a CNF formula representing the decision tree as hard clauses and add binary variables representing the instance as unary soft clauses with weights equal to 1. Several calls to a MAXSAT solver ([OPENWBO](https://github.com/sat-group/open-wbo)) are performed and the result of each call is a minimal sufficient reason. The minimal sufficient reasons are those with the lowest scores (i.e. the sum of weights). Thus, the algorithm stops when a sufficient non-minimal reason is found (i.e. when a higher score is found). Moreover, the method prevents finding the same sufficient reason twice or more by adding clauses (called blocking clauses) between each invocation.
Returns ```n``` minimal sufficient reasons of the current instance in a ```Tuple``` (when ```n``` is set to 1, does not return a ```Tuple``` but just the reason). Supports the excluded features. The reasons are in the form of binary variables, you must use the ```to_features``` method if you want to obtain a representation based on the features considered at start.|\n",
"| n ```Integer``` ```explainer.ALL```: The desired number of sufficient reasons. Set this to ```Explainer.ALL``` to request all reasons. Default value is 1.|\n",
"| time_limit ```Integer```: The time limit of the method in seconds. Sets this to ```None``` to give this process an infinite amount of time. Default value is ```None```.|"
]
},
{
"cell_type": "markdown",
"id": "f38f0ac8",
"metadata": {},
"source": [
"One can also compute prefered sufficient reasons. Indeed, the user may prefer reason containing some features and can provide weights in order to discriminate some features. Please take a look to the [Preferences](/documentation/explainer/preferences/) page for more information."
]
},
{
"cell_type": "markdown",
"id": "8f2b0e57",
"metadata": {},
"source": [
"| <ExplainerRF Object>.prefered_sufficient_reason(*, method, n=1, time_limit=None, weights=None, features_partition=None): | \n",
"| :----------- | \n",
"|This method considers a CNF formula representing the decision tree as hard clauses and add binary variables representing the instance as unary soft clauses with weights equal to different values depending the ```method``` used. If the method is ```PreferredReasonMethod.WEIGHTS``` then weights are given by the parameter ```weights```, otherwise this parameter is useless. If the method is ```PreferredReasonMethod.INCLUSION_PREFERRED``` then the partition of features is given by the parameter features_partition, otherwise this parameter is useless. Several calls to a MAXSAT solver (OPENWBO) are performed and the result of each call is a preferred sufficient reason. The method prevents finding the same reason twice or more by adding clauses (called blocking clauses) between each invocation.
Returns ```n``` preferred majoritary reason of the current instance in a Tuple (when ```n``` is set to 1, does not return a ```Tuple``` but just the reason). Supports the excluded features. The reasons are in the form of binary variables, you must use the ```to_features``` method if you want to obtain a representation based on the features represented at start.|\n",
"| method ```PreferredReasonMethod.WEIGHTS``` ```PreferredReasonMethods.SHAPLEY``` ```PreferredReasonMethod.FEATURE_IMPORTANCE``` ```PreferredReasonMethod.WORD_FREQUENCY```: The method used to discriminate features.\n",
"| time_limit ```Integer``` ```None```: The time limit of the method in seconds. Sets this to ```None``` to give this process an infinite amount of time. Default value is ```None```.|\n",
"| n ```Integer```: The number of majoritary reasons computed. Currently n=1 or n=Exmplainer.ALL is only supported. Default value is 1.|\n",
"| weights ```List```: The weights (list of floats, one per feature, used to discriminate features. Useful when ```method``` is ```PreferredReasonMethod.WEIGHTS```. Default value is ```None```.|\n",
"| features_partition ```List``` of ```List```: The partition of features. The first elements are preferred to the second ones, and so on. Usefull when ```method``` is ```PreferredReasonMethod.INCLUSION_PREFERRED```. Default value is ```None```.|"
]
},
{
"cell_type": "markdown",
"id": "f766d997",
"metadata": {},
"source": [
"The PyXAI library provides a way to check that a reason is sufficient:"
]
},
{
"cell_type": "markdown",
"id": "b149861a",
"metadata": {},
"source": [
"| <Explainer Object>.is_sufficient_reason(reason, *, n_samples=50): | \n",
"| :----------- | \n",
"| This method checks if a reason is sufficient. To do that, it calls first the method ```is_reason``` to check whether ```n_samples``` complete binary representations from this reason (randomly generated) lead to the correct prediction. Secondly, it verifies the minimality of the reason w.r.t. set inclusion. To do that, it deletes a literal of the reason, tests with ```is_reason``` that this new binary representation is not a reason and puts back this literal. The method repeats this operation on every literal of the reason. Because this method is based on a given number of samples and random generation, it is not deterministic (i.e. it is not 100% sure to provide the right answer). It returns ```False``` if it is sure that the input reason is a sufficient one, ```True``` if it is a sufficient reason based on the ```n_samples``` tests and ```None``` if the answer is not sure.|\n",
"| reason ```List``` of ```Integer```: The reason to be checked.|\n",
"| n_samples ```Integer```: The number of samples to be considered, i.e., the number of complete binary representations to be generated randomly from the reason. Default value is 50.|"
]
},
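{
"cell_type": "markdown",
"id": "e5c73b42",
"metadata": {},
"source": [
"The sampling part of this check can be sketched as follows (our own code with hypothetical names, not the PyXAI implementation): draw random completions of the reason and verify that the prediction never changes. A counterexample disproves sufficiency, while the absence of one over ```n_samples``` draws only suggests it.\n",
"\n",
"```python\n",
"import random\n",
"\n",
"def f(x):\n",
"    # Boolean function of the hand-crafted tree: (x1 and x4) or (x2 and x3 and x4)\n",
"    return int((x[0] and x[3]) or (x[1] and x[2] and x[3]))\n",
"\n",
"def looks_sufficient(reason, instance, n_samples=50, seed=0):\n",
"    # reason maps fixed positions to the instance's values\n",
"    rng = random.Random(seed)\n",
"    for _ in range(n_samples):\n",
"        x = [reason.get(i, rng.randint(0, 1)) for i in range(len(instance))]\n",
"        if f(tuple(x)) != f(instance):\n",
"            return False  # certain: not a sufficient reason\n",
"    return True  # only probable: no counterexample was sampled\n",
"\n",
"print(looks_sufficient({0: 1, 3: 1}, (1, 1, 1, 1)))  # True\n",
"print(looks_sufficient({3: 1}, (1, 1, 1, 1)))        # almost surely False\n",
"```"
]
},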
{
"cell_type": "markdown",
"id": "c708202d",
"metadata": {},
"source": [
"Reminder that the literals of a binary representation represent the conditions \"\\ \\ \\ ?\" (such as \"$x_4 \\ge 0.5$ ?\") implied by an instance. A literal $l$ of a binary representation is a **necessary feature** for $x$ if and only if $l$ belongs to every sufficient reason $t$ for $x$. In contrast, a literal $l$ of a binary representation is a **relevant feature** for $x$ if and only if $l$ belongs to at least one sufficient reason $t$ for $x$. PyXAI provides methods to compute them:"
]
},
{
"cell_type": "markdown",
"id": "ce363fc7",
"metadata": {},
"source": [
"| <ExplainerDT Object>.necessary_literals(): | \n",
"| :----------- | \n",
"| Returns a ```List``` containing necessary literals, i.e. literals belonging to every sufficient reason.|"
]
},
{
"cell_type": "markdown",
"id": "efbac738",
"metadata": {},
"source": [
"| <ExplainerDT Object>.relevant_literals(): | \n",
"| :----------- | \n",
"| Returns a ```List``` containing relevant literals, i.e. literals belonging to at least one sufficient reason.|"
]
},
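{
"cell_type": "markdown",
"id": "f6d94a53",
"metadata": {},
"source": [
"Conceptually, once all sufficient reasons are available, the necessary literals are their intersection and the relevant literals are their union. A minimal sketch (our own code, using the sufficient reasons of the instance $(0,0,0,0)$ from the hand-crafted example below):\n",
"\n",
"```python\n",
"reasons = [(-4,), (-1, -2), (-1, -3)]  # sufficient reasons of (0,0,0,0)\n",
"necessary = set(reasons[0]).intersection(*map(set, reasons[1:]))\n",
"relevant = set().union(*map(set, reasons))\n",
"print(sorted(necessary))  # []\n",
"print(sorted(relevant))   # [-4, -3, -2, -1]\n",
"```\n",
"\n",
"For this instance, no literal is necessary (no literal appears in every reason), while every literal of some reason is relevant."
]
},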
{
"cell_type": "markdown",
"id": "6d4deafe",
"metadata": {},
"source": [
"For a given instance, it can be interesting to compute the number of sufficient reasons or the number of sufficient reasons per literal of the binary representation. PyXAI allows this: "
]
},
{
"cell_type": "markdown",
"id": "8d63439b",
"metadata": {},
"source": [
"| <ExplainerDT Object>.n_sufficient_reasons(*, time_limit=None): | \n",
"| :----------- | \n",
"| Returns the number of sufficient reasons . This method uses the [D4](https://github.com/crillab/d4) compiler to count the models of a CNF formula representing the tree. Supports the excluded features. Returns ```None``` if time_limit was reached.|\n",
"| time_limit ```Integer``` ```None```: The time limit of the method in seconds. Sets this to ```None``` to give this process an infinite amount of time. Default value is ```None```.|"
]
},
{
"cell_type": "markdown",
"id": "abe8c930",
"metadata": {},
"source": [
"| <ExplainerDT Object>.n_sufficient_reasons_per_literal(*, time_limit=None): | \n",
"| :----------- | \n",
"| Returns the number of sufficient reasons per literal in the form of a python dictionary where the keys are the literals and the values the numbers of sufficient reasons. Returns an empty dictionnary if ```time_limit``` is reached. This method uses the [D4](https://github.com/crillab/d4) compiler to count the models of a CNF formula representing the decision tree. Supports the excluded features. The results are in the form of binary variables, you must use the ```to_features``` method if you want to obtain a representatation based on features represented at start.|\n",
"| time_limit ```Integer``` ```None```: The time limit of the method in seconds. Sets this to ```None``` to give this process an infinite amount of time. Default value is ```None```.|"
]
},
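{
"cell_type": "markdown",
"id": "a7e05b64",
"metadata": {},
"source": [
"On a toy set of reasons, these counts amount to the following (our own sketch, not the PyXAI implementation, reusing the sufficient reasons of $(0,0,0,0)$ from the hand-crafted example below):\n",
"\n",
"```python\n",
"from collections import Counter\n",
"\n",
"reasons = [(-4,), (-1, -2), (-1, -3)]  # sufficient reasons of (0,0,0,0)\n",
"per_literal = Counter(l for r in reasons for l in r)\n",
"print(len(reasons))       # 3 sufficient reasons in total\n",
"print(dict(per_literal))  # {-4: 1, -1: 2, -2: 1, -3: 1}\n",
"```"
]
},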
{
"cell_type": "markdown",
"id": "273580b5",
"metadata": {},
"source": [
"More information about sufficient reasons and minimal sufficient reasons can be found in the paper [On the Explanatory Power of Decision Trees](https://arxiv.org/abs/2108.05266).\n",
"The basic methods (```initialize```, ```set_instance```, ```to_features```, ```is_reason```, ...) of the ```Explainer``` module used in the next examples are described in the [Explainer Principles](/documentation/explainer/) page."
]
},
{
"cell_type": "markdown",
"id": "869cba3c",
"metadata": {},
"source": [
"## Example from a Hand-Crafted Tree"
]
},
{
"cell_type": "markdown",
"id": "6a557012",
"metadata": {},
"source": [
"For this example, we take the Decision Tree of the [Building Models](/documentation/learning/builder/DTbuilder/) page consisting of $4$ binary features ($x_1$, $x_2$, $x_3$ and $x_4$). \n",
"\n",
"The following figure shows in red and bold a minimal sufficient reason $(x_1, x_4)$ for the instance $(1,1,1,1)$. \n",
"
\n",
"\n",
"The next figure gives in blue and bold a minimal sufficient reason $(-x_4)$ for the instance $(0,0,0,0)$. \n",
"
\n",
"\n",
" We now show how to get those reasons with PyXAI. We start by building the decision tree: "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "745fbf2c",
"metadata": {},
"outputs": [],
"source": [
"from pyxai import Builder, Explainer\n",
"\n",
"node_x4_1 = Builder.DecisionNode(4, left=0, right=1)\n",
"node_x4_2 = Builder.DecisionNode(4, left=0, right=1)\n",
"node_x4_3 = Builder.DecisionNode(4, left=0, right=1)\n",
"node_x4_4 = Builder.DecisionNode(4, left=0, right=1)\n",
"node_x4_5 = Builder.DecisionNode(4, left=0, right=1)\n",
"\n",
"node_x3_1 = Builder.DecisionNode(3, left=0, right=node_x4_1)\n",
"node_x3_2 = Builder.DecisionNode(3, left=node_x4_2, right=node_x4_3)\n",
"node_x3_3 = Builder.DecisionNode(3, left=node_x4_4, right=node_x4_5)\n",
"\n",
"node_x2_1 = Builder.DecisionNode(2, left=0, right=node_x3_1)\n",
"node_x2_2 = Builder.DecisionNode(2, left=node_x3_2, right=node_x3_3)\n",
"\n",
"node_x1_1 = Builder.DecisionNode(1, left=node_x2_1, right=node_x2_2)\n",
"\n",
"tree = Builder.DecisionTree(4, node_x1_1, force_features_equal_to_binaries=True)"
]
},
{
"cell_type": "markdown",
"id": "bad9b535",
"metadata": {},
"source": [
"And we compute the sufficient reasons for each of these two instances: "
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0f5c98bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"sufficient_reasons: ((1, 4), (2, 3, 4))\n",
"to_features: ('f1 >= 0.5', 'f4 >= 0.5')\n",
"to_features: ('f2 >= 0.5', 'f3 >= 0.5', 'f4 >= 0.5')\n",
"minimal_sufficient_reason: (1, 4)\n",
"-------------------------------\n",
"sufficient_reasons: ((-4,), (-1, -2), (-1, -3))\n",
"to_features: ('f4 < 0.5',)\n",
"to_features: ('f1 < 0.5', 'f2 < 0.5')\n",
"to_features: ('f1 < 0.5', 'f3 < 0.5')\n",
"minimal_sufficient_reasons: (-4,)\n"
]
}
],
"source": [
"explainer = Explainer.initialize(tree)\n",
"explainer.set_instance((1,1,1,1))\n",
"\n",
"sufficient_reasons = explainer.sufficient_reason(n=Explainer.ALL)\n",
"print(\"sufficient_reasons:\", sufficient_reasons)\n",
"assert sufficient_reasons == ((1, 4), (2, 3, 4)), \"The sufficient reasons are not good !\"\n",
"\n",
"for sufficient in sufficient_reasons:\n",
" print(\"to_features:\", explainer.to_features(sufficient)) \n",
" assert explainer.is_sufficient_reason(sufficient), \"This is have to be a sufficient reason !\"\n",
"\n",
"minimals = explainer.minimal_sufficient_reason()\n",
"print(\"minimal_sufficient_reason:\", minimals)\n",
"assert minimals == (1, 4), \"The minimal sufficient reasons are not good !\"\n",
"\n",
"print(\"-------------------------------\")\n",
"\n",
"explainer.set_instance((0,0,0,0))\n",
"\n",
"sufficient_reasons = explainer.sufficient_reason(n=Explainer.ALL)\n",
"print(\"sufficient_reasons:\", sufficient_reasons)\n",
"assert sufficient_reasons == ((-4,), (-1, -2), (-1, -3)), \"The sufficient reasons are not good !\"\n",
"\n",
"for sufficient in sufficient_reasons:\n",
" print(\"to_features:\", explainer.to_features(sufficient))\n",
" assert explainer.is_sufficient_reason(sufficient), \"This is have to be a sufficient reason !\"\n",
"\n",
"minimals = explainer.minimal_sufficient_reason(n=1)\n",
"print(\"minimal_sufficient_reasons:\", minimals)\n",
"assert minimals == (-4,), \"The minimal sufficient reasons are not good !\""
]
},
{
"cell_type": "markdown",
"id": "e0420183",
"metadata": {},
"source": [
"## Example from a Real Dataset"
]
},
{
"cell_type": "markdown",
"id": "03c8f44e",
"metadata": {},
"source": [
"For this example, we take the ```compas.csv``` dataset. We create a model using the hold-out approach (by default, the test size is set to 30%) and select a well-classified instance. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5a1c9c9b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"data:\n",
" Number_of_Priors score_factor Age_Above_FourtyFive \n",
"0 0 0 1 \\\n",
"1 0 0 0 \n",
"2 4 0 0 \n",
"3 0 0 0 \n",
"4 14 1 0 \n",
"... ... ... ... \n",
"6167 0 1 0 \n",
"6168 0 0 0 \n",
"6169 0 0 1 \n",
"6170 3 0 0 \n",
"6171 2 0 0 \n",
"\n",
" Age_Below_TwentyFive African_American Asian Hispanic \n",
"0 0 0 0 0 \\\n",
"1 0 1 0 0 \n",
"2 1 1 0 0 \n",
"3 0 0 0 0 \n",
"4 0 0 0 0 \n",
"... ... ... ... ... \n",
"6167 1 1 0 0 \n",
"6168 1 1 0 0 \n",
"6169 0 0 0 0 \n",
"6170 0 1 0 0 \n",
"6171 1 0 0 1 \n",
"\n",
" Native_American Other Female Misdemeanor Two_yr_Recidivism \n",
"0 0 1 0 0 0 \n",
"1 0 0 0 0 1 \n",
"2 0 0 0 0 1 \n",
"3 0 1 0 1 0 \n",
"4 0 0 0 0 1 \n",
"... ... ... ... ... ... \n",
"6167 0 0 0 0 0 \n",
"6168 0 0 0 0 0 \n",
"6169 0 1 0 0 0 \n",
"6170 0 0 1 1 0 \n",
"6171 0 0 1 0 1 \n",
"\n",
"[6172 rows x 12 columns]\n",
"-------------- Information ---------------\n",
"Dataset name: ../../../dataset/compas.csv\n",
"nFeatures (nAttributes, with the labels): 12\n",
"nInstances (nObservations): 6172\n",
"nLabels: 2\n",
"--------------- Evaluation ---------------\n",
"method: HoldOut\n",
"output: DT\n",
"learner_type: Classification\n",
"learner_options: {'max_depth': None, 'random_state': 0}\n",
"--------- Evaluation Information ---------\n",
"For the evaluation number 0:\n",
"metrics:\n",
" accuracy: 65.33477321814254\n",
"nTraining instances: 4320\n",
"nTest instances: 1852\n",
"\n",
"--------------- Explainer ----------------\n",
"For the evaluation number 0:\n",
"**Decision Tree Model**\n",
"nFeatures: 11\n",
"nNodes: 539\n",
"nVariables: 46\n",
"\n",
"--------------- Instances ----------------\n",
"number of instances selected: 1\n",
"----------------------------------------------\n"
]
}
],
"source": [
"from pyxai import Learning, Explainer\n",
"\n",
"learner = Learning.Scikitlearn(\"../../../dataset/compas.csv\", learner_type=Learning.CLASSIFICATION)\n",
"model = learner.evaluate(method=Learning.HOLD_OUT, output=Learning.DT)\n",
"instance, prediction = learner.get_instances(model, n=1, correct=True)"
]
},
{
"cell_type": "markdown",
"id": "4cacbab0",
"metadata": {},
"source": [
"And we compute a sufficient reason for this instance: "
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b7691f19",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"instance: [0 0 1 0 0 0 0 0 1 0 0]\n",
"prediction: 0\n",
"\n",
"\n",
"sufficient reason: 4\n",
"to features ('Number_of_Priors <= 0.5', 'score_factor <= 0.5', 'Age_Below_TwentyFive <= 0.5')\n",
"is sufficient_reason (for max 50 checks): True\n",
"\n",
"\n",
"minimal: 4\n",
"is sufficient_reason (for max 50 checks): True\n",
"\n",
"\n",
"necessary literals: [-3, -1]\n",
"\n",
"necessary literals features: ('score_factor <= 0.5', 'Age_Below_TwentyFive <= 0.5')\n",
"\n",
"relevant literals: [-4, -7, -2, -9, -2, -14, -4, 5, -11, -4, 5, -13, -2, 5, -15, -2, 5, -8, -2, -11, -15, -2, -12, -17, -2, -7, -17, -2, 6, -17, -2, -11, -17, -4, 6, -9, -12, -4, 6, -12, -13, -2, 5, -12, -18, -2, -8, -17, -19, -4, 6, -9, -11, -13]\n",
"\n",
"n sufficient reasons: 25\n",
"\n",
"sufficient_reasons_per_attribute: {-3: 25, -1: 25, -4: 6, -7: 20, -2: 9, -9: 19, -14: 16, 5: 15, -11: 14, -13: 10, -15: 10, -8: 7, -12: 12, -17: 12, 6: 8, -18: 2, -19: 1}\n",
"\n",
"sufficient_reasons_per_attribute features: OrderedDict([('Number_of_Priors', [{'id': 1, 'name': 'Number_of_Priors', 'operator': , 'sign': True, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 6, 'theory': None, 'string': 'Number_of_Priors <= 0.5'}]), ('score_factor', [{'id': 2, 'name': 'score_factor', 'operator': , 'sign': True, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 25, 'theory': None, 'string': 'score_factor <= 0.5'}]), ('Age_Above_FourtyFive', [{'id': 3, 'name': 'Age_Above_FourtyFive', 'operator': , 'sign': False, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 15, 'theory': None, 'string': 'Age_Above_FourtyFive > 0.5'}]), ('Age_Below_TwentyFive', [{'id': 4, 'name': 'Age_Below_TwentyFive', 'operator': , 'sign': True, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 25, 'theory': None, 'string': 'Age_Below_TwentyFive <= 0.5'}]), ('African_American', [{'id': 5, 'name': 'African_American', 'operator': , 'sign': True, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 7, 'theory': None, 'string': 'African_American <= 0.5'}]), ('Asian', [{'id': 6, 'name': 'Asian', 'operator': , 'sign': True, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 20, 'theory': None, 'string': 'Asian <= 0.5'}]), ('Hispanic', [{'id': 7, 'name': 'Hispanic', 'operator': , 'sign': True, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 12, 'theory': None, 'string': 'Hispanic <= 0.5'}]), ('Other', [{'id': 9, 'name': 'Other', 'operator': , 'sign': False, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 8, 'theory': None, 'string': 'Other > 0.5'}]), ('Female', [{'id': 10, 'name': 'Female', 'operator': , 'sign': True, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 19, 'theory': None, 'string': 'Female <= 0.5'}]), ('Misdemeanor', [{'id': 11, 'name': 'Misdemeanor', 'operator': , 'sign': True, 'operator_sign_considered': , 'threshold': 0.5, 'weight': 14, 'theory': None, 'string': 'Misdemeanor <= 0.5'}])])\n"
]
}
],
"source": [
"explainer = Explainer.initialize(model, instance)\n",
"print(\"instance:\", instance)\n",
"print(\"prediction:\", prediction)\n",
"print()\n",
"sufficient_reason = explainer.sufficient_reason(n=1)\n",
"#for s in sufficient_reasons:\n",
"print(\"\\nsufficient reason:\", len(sufficient_reason))\n",
"print(\"to features\", explainer.to_features(sufficient_reason))\n",
"print(\"is sufficient_reason (for max 50 checks): \", explainer.is_sufficient_reason(sufficient_reason, n_samples=50))\n",
"print()\n",
"minimal = explainer.minimal_sufficient_reason()\n",
"print(\"\\nminimal:\", len(minimal))\n",
"print(\"is sufficient_reason (for max 50 checks): \", explainer.is_sufficient_reason(sufficient_reason, n_samples=50))\n",
"print()\n",
"print(\"\\nnecessary literals: \", explainer.necessary_literals())\n",
"print(\"\\nnecessary literals features: \", explainer.to_features(explainer.necessary_literals()))\n",
"print(\"\\nrelevant literals: \", explainer.relevant_literals())\n",
"print()\n",
"print(\"n sufficient reasons:\", explainer.n_sufficient_reasons())\n",
"sufficient_reasons_per_attribute = explainer.n_sufficient_reasons_per_attribute()\n",
"print(\"\\nsufficient_reasons_per_attribute:\", sufficient_reasons_per_attribute)\n",
"print(\"\\nsufficient_reasons_per_attribute features:\", explainer.to_features(sufficient_reasons_per_attribute, details=True))\n"
]
},
{
"cell_type": "markdown",
"id": "14fa4d1d",
"metadata": {},
"source": [
"Other types of explanations are presented in the [Explanations Computation](/documentation/explanations/DTexplanations/) page."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 5
}