Friday, May 6, 2016

Rule learner (or Rule Induction)

It is also known as the separate-and-conquer method. The method applies an iterative process: it first generates a rule that covers a subset of the training examples and then removes all examples covered by that rule from the training set. The process is repeated until no examples are left to cover. The final rule set is the collection of the rules discovered at every iteration of the process [13] (a generic sketch of this covering loop is given after the list). Some examples of these kinds of systems are:
  • OneR
OneR, or “One Rule”, is a simple algorithm proposed by Holte. OneR builds one rule for each attribute in the training data and then selects the rule with the smallest error rate as its ‘one rule’. To create a rule for an attribute, the most frequent class for each attribute value must be determined, i.e. the class that appears most often for that attribute value. A rule is then simply the set of attribute values bound to their majority class, and the attribute whose rule has the lowest error rate is kept. In the event that two or more rules have the same error rate, the rule is chosen at random (a toy sketch of this procedure appears after the list).
R.C. Holte (1993). Very simple classification rules perform well on most commonly used datasets. Machine Learning. 11:63-91.
  • Ridor
The Ridor algorithm is an implementation of a RIpple-DOwn Rule learner proposed by Gaines and Compton. It first generates a default rule and then the exceptions to the default rule with the least (weighted) error rate. It then generates the “best” exceptions for each exception and iterates until the exceptions are pure, thus performing a tree-like expansion of exceptions. The exceptions are a set of rules that predict classes other than the default. IREP is used to generate the exceptions.
Brian R. Gaines, Paul Compton (1995). Induction of Ripple-Down Rules Applied to Modeling Large Databases. J. Intell. Inf. Syst.. 5(3):211-228.
  • PART
PART is a separate-and-conquer rule learner proposed by Frank and Witten. The algorithm produces sets of rules called ‘decision lists’, which are ordered sets of rules. A new data item is compared to each rule in the list in turn, and the item is assigned the class of the first matching rule (a default is applied if no rule matches). PART builds a partial C4.5 decision tree in each iteration and makes the “best” leaf into a rule. The algorithm is a combination of C4.5 and RIPPER rule learning.
Eibe Frank, Ian H. Witten: Generating Accurate Rule Sets Without Global Optimization. In: Fifteenth International Conference on Machine Learning, 144-151, 1998.
  • JRip (RIPPER)
JRip implements a propositional rule learner, Repeated Incremental Pruning to Produce Error Reduction (RIPPER), which was proposed by William W. Cohen as an optimized version of IREP. RIPPER builds a rule set by repeatedly adding rules to an empty rule set until all positive examples are covered. Rules are formed by greedily adding conditions to the antecedent of a rule (starting with an empty antecedent) until no negative examples are covered. After a rule set is constructed, an optimization post-pass massages the rule set to reduce its size and improve its fit to the training data. A combination of cross-validation and minimum description length techniques is used to prevent overfitting (a usage sketch that runs JRip and the other rule learners listed here follows the list).
Cohen, W. W. (1995). Fast effective rule induction. In: Machine Learning: Proceedings of the Twelfth International Conference, Lake Tahoe, California. http://citeseer.ist.psu.edu/cohen95fast.html
  • DecisionTable
The DecisionTable algorithm builds and uses a simple decision table majority classifier, as proposed by Kohavi. It summarizes the dataset with a ‘decision table’ which contains the same number of attributes as the original dataset. A new data item is then assigned a class by finding the line in the decision table that matches its non-class values (a minimal lookup sketch appears after the list). DecisionTable employs the wrapper method to find a good subset of attributes for inclusion in the table. By eliminating attributes that contribute little or nothing to a model of the dataset, the algorithm reduces the likelihood of overfitting and creates a smaller, more condensed decision table.
Ron Kohavi: The Power of Decision Tables. In: 8th European Conference on Machine Learning, 174-189, 1995.
  • ConjunctiveRule
The ConjunctiveRule algorithm implements a single conjunctive rule learner that can predict numeric and nominal class labels. A rule consists of antecedents “AND”ed together and the consequent (class value) for the classification/regression. In this case, the consequent is the distribution of the available classes (or the mean for a numeric value) in the dataset. If a test instance is not covered by this rule, it is predicted using the default class distribution/value of the training data not covered by the rule. The learner selects an antecedent by computing the information gain of each candidate antecedent and prunes the generated rule using Reduced Error Pruning (REP) or simple pre-pruning based on the number of antecedents. For classification, the information of one antecedent is the weighted average of the entropies of the data both covered and not covered by the rule.
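
To make the covering loop described above concrete, here is a minimal generic sketch in Java (the class and method names SeparateAndConquer, induceRuleSet and learnOneRule are illustrative only, not taken from any of the systems above); learnOneRule stands in for whatever single-rule learner a concrete algorithm plugs into the loop:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class SeparateAndConquer {
    // Learn a rule, remove ("separate") the examples it covers, repeat on the rest.
    static <E> List<Predicate<E>> induceRuleSet(List<E> training,
                                                Function<List<E>, Predicate<E>> learnOneRule) {
        List<Predicate<E>> ruleSet = new ArrayList<>();
        List<E> uncovered = new ArrayList<>(training);
        while (!uncovered.isEmpty()) {
            Predicate<E> rule = learnOneRule.apply(uncovered); // "conquer": learn a rule on what is left
            int before = uncovered.size();
            uncovered.removeIf(rule);                          // "separate": drop the covered examples
            if (uncovered.size() == before) {
                break;                                         // safety stop if the rule covers nothing new
            }
            ruleSet.add(rule);
        }
        return ruleSet;                                        // the collected rules form the final rule set
    }
}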
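
For OneR, the rule-building step can be shown with a small self-contained sketch over nominal attributes; the weather-style data are invented for illustration, the last column is taken to be the class, and ties are broken by whichever attribute is found first rather than at random:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class OneRSketch {
    public static void main(String[] args) {
        String[][] data = {                 // hypothetical examples: outlook, temperature, play?
            {"sunny", "hot",  "no"},
            {"sunny", "mild", "no"},
            {"rainy", "mild", "yes"},
            {"rainy", "cool", "yes"},
            {"sunny", "cool", "yes"},
        };
        int classCol = data[0].length - 1;
        int bestAttr = -1;
        int bestErrors = Integer.MAX_VALUE;
        Map<String, String> bestRule = null;

        for (int a = 0; a < classCol; a++) {
            // Count how often each class occurs for each value of attribute a.
            Map<String, Map<String, Integer>> counts = new HashMap<>();
            for (String[] row : data) {
                counts.computeIfAbsent(row[a], k -> new HashMap<>())
                      .merge(row[classCol], 1, Integer::sum);
            }
            // The rule maps each attribute value to its majority class; its errors are
            // the examples that do not belong to that majority class.
            Map<String, String> rule = new HashMap<>();
            int errors = 0;
            for (Map.Entry<String, Map<String, Integer>> e : counts.entrySet()) {
                String majority = Collections.max(e.getValue().entrySet(),
                                                  Map.Entry.comparingByValue()).getKey();
                rule.put(e.getKey(), majority);
                int total = e.getValue().values().stream().mapToInt(Integer::intValue).sum();
                errors += total - e.getValue().get(majority);
            }
            if (errors < bestErrors) {       // keep the attribute whose rule makes the fewest errors
                bestErrors = errors;
                bestAttr = a;
                bestRule = rule;
            }
        }
        System.out.println("One rule on attribute " + bestAttr + ": " + bestRule
                + " (" + bestErrors + " errors)");
    }
}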
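
The names above (OneR, Ridor, PART, JRip, DecisionTable, ConjunctiveRule) are also the names of rule classifiers shipped with the Weka workbench, so a minimal usage sketch, assuming Weka is on the classpath and that "weather.nominal.arff" is a placeholder for any ARFF file with a nominal class, might look like this; Ridor and ConjunctiveRule are left out of the array only because, depending on the Weka version, they are installed as separate packages, but they can be added in the same way:

import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.rules.DecisionTable;
import weka.classifiers.rules.JRip;
import weka.classifiers.rules.OneR;
import weka.classifiers.rules.PART;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RuleLearnerDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("weather.nominal.arff");  // placeholder dataset path
        data.setClassIndex(data.numAttributes() - 1);              // last attribute is the class

        Classifier[] learners = { new OneR(), new JRip(), new PART(), new DecisionTable() };
        for (Classifier learner : learners) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(learner, data, 10, new Random(1)); // 10-fold cross-validation
            learner.buildClassifier(data);                             // refit on all data to inspect the rules
            System.out.println("=== " + learner.getClass().getSimpleName() + " ===");
            System.out.println(learner);                               // toString() prints the induced rule set
            System.out.printf("10-fold CV accuracy: %.2f%%%n", eval.pctCorrect());
        }
    }
}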
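
Finally, the DecisionTable lookup step described above can be pictured as a map from selected attribute values to a majority class; the table rows and default class below are invented, and the wrapper-based attribute selection and majority counting that would build a real table are assumed to have already happened:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DecisionTableLookup {
    public static void main(String[] args) {
        // Table body: values of the selected attributes -> majority class of the matching training rows.
        Map<List<String>, String> table = new HashMap<>();
        table.put(Arrays.asList("sunny", "high"),   "no");
        table.put(Arrays.asList("rainy", "high"),   "yes");
        table.put(Arrays.asList("sunny", "normal"), "yes");

        String defaultClass = "yes";  // fallback when no row matches (e.g. the overall majority class)

        List<String> newItem = Arrays.asList("rainy", "normal");       // non-class values of a new data item
        String predicted = table.getOrDefault(newItem, defaultClass);  // find the matching line, else default
        System.out.println("Predicted class: " + predicted);
    }
}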

5 Responses to Rule learner (or Rule Induction)

  1. Salaam,
    very nice blog.
    I found the summaries to be well written 😉
  2. Ridz says:
    Thank you. You have a very interesting blog too. I will add it to my list:)
    Ridz
  3. Spyder says:
    Thank You. Excellently Summarized.
  4. Anonymous says:
    hello,
    nice blog brother
    I have a request please
    I need deep information about “JRip” for a presentation
    Can you help me to find good resources ?
  5. Manal says:
    hello,
    nice blog brother
    I have a request please
    I need deep information about “JRip” for a presentation
    Can you help me to find good resources ?
