Version space learning

Version space for a "rectangle" hypothesis language in two dimensions. Green pluses are positive examples, and red circles are negative examples. GB is the maximally general positive hypothesis boundary, and SB is the maximally specific positive hypothesis boundary. The intermediate (thin) rectangles represent the hypotheses in the version space.

Version space learning is a logical approach to machine learning, specifically binary classification. Version space learning algorithms search a predefined space of hypotheses, viewed as a set of logical sentences. Formally, the hypothesis space is a disjunction[1]

H_1 \lor H_2 \lor \cdots \lor H_n

(i.e., at least one of the hypotheses 1 through n is true). A version space learning algorithm is presented with examples, which it uses to restrict its hypothesis space: for each example x, the hypotheses that are inconsistent with x are removed from the space.[2] This iterative refinement of the hypothesis space is called the candidate elimination algorithm, and the hypothesis space maintained inside the algorithm is called its version space.[1]
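To make the elimination step concrete, the following sketch (in Python, with an invented interval hypothesis space and illustrative names) enumerates a small finite hypothesis space explicitly and filters out every hypothesis that disagrees with a labelled example. Real hypothesis spaces are usually far too large for this brute-force representation, which is what motivates the boundary-set representation described below.

```python
# Minimal sketch of the elimination step: keep only the hypotheses that
# agree with each labelled example. The interval hypothesis space and all
# names here are illustrative, not part of any standard library.
from typing import Callable, List, Tuple

Example = Tuple[float, ...]
Hypothesis = Callable[[Example], bool]   # a hypothesis labels an example True/False

def eliminate(version_space: List[Hypothesis],
              example: Example, label: bool) -> List[Hypothesis]:
    """Remove every hypothesis that is inconsistent with (example, label)."""
    return [h for h in version_space if h(example) == label]

# Toy hypothesis space: "the concept is the interval [lo, hi]" over one feature.
hypotheses: List[Hypothesis] = [
    (lambda x, lo=lo, hi=hi: lo <= x[0] <= hi)
    for lo in range(5) for hi in range(lo, 5)
]

version_space = list(hypotheses)
for example, label in [((1.0,), True), ((3.0,), True), ((4.0,), False)]:
    version_space = eliminate(version_space, example, label)

print(f"{len(version_space)} of {len(hypotheses)} hypotheses remain consistent")
```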

Contents

  • The version space algorithm
  • Historical background
  • See also
  • References

The version space algorithm

In settings where there is a generality-ordering on hypotheses, it is possible to represent the version space by two sets of hypotheses: (1) the most specific consistent hypotheses, and (2) the most general consistent hypotheses, where "consistent" indicates agreement with observed data.

The most specific hypotheses (i.e., the specific boundary SB) cover the observed positive training examples and as little of the remaining feature space as possible. If reduced any further, these hypotheses would exclude a positive training example and hence become inconsistent. These minimal hypotheses essentially constitute a (pessimistic) claim that the true concept is defined just by the positive data already observed: thus, if a novel (never-before-seen) data point is observed, it should be assumed to be negative (i.e., if data has not previously been ruled in, then it is ruled out).

The most general hypotheses (i.e., the general boundary GB) cover the observed positive training examples and as much of the remaining feature space as possible without including any negative training examples. If enlarged any further, these hypotheses would include a negative training example and hence become inconsistent. These maximal hypotheses essentially constitute an (optimistic) claim that the true concept is excluded only by the negative data already observed: thus, if a novel (never-before-seen) data point is observed, it should be assumed to be positive (i.e., if data has not previously been ruled out, then it is ruled in).

Thus, during learning, the version space (which itself is a set – possibly infinite – containing all consistent hypotheses) can be represented by just its two boundary sets, the maximally specific and the maximally general consistent hypotheses, and learning operations can be performed just on these representative sets.
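The sketch below illustrates this boundary representation for conjunctive attribute-value hypotheses, in the style of Mitchell's candidate elimination: '?' in a position matches any value, a single maximally specific hypothesis S is generalized by positive examples, and a set of maximally general hypotheses G is specialized by negative examples. The attribute domains and examples are invented for illustration, and a complete implementation would also prune non-maximal members of G.

```python
# Simplified candidate elimination with boundary sets, for conjunctive
# attribute-value hypotheses. '?' in a position matches any value.
# This is an illustrative sketch, not a reference implementation.

def covers(h, x):
    """A conjunctive hypothesis covers x if each position is '?' or equals x."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def generalize(s, x):
    """Minimally generalize the specific hypothesis s so it covers positive x."""
    if s is None:                      # first positive example: adopt it exactly
        return tuple(x)
    return tuple(sv if sv == xv else '?' for sv, xv in zip(s, x))

def specializations(g, x, domains):
    """Minimal specializations of g that no longer cover the negative example x."""
    out = []
    for i, gv in enumerate(g):
        if gv == '?':
            out += [g[:i] + (v,) + g[i + 1:] for v in domains[i] if v != x[i]]
    return out

def candidate_elimination(examples, domains):
    S = None                                   # specific boundary (a single hypothesis here)
    G = [tuple('?' for _ in domains)]          # general boundary
    for x, positive in examples:
        if positive:
            S = generalize(S, x)
            G = [g for g in G if covers(g, x)]          # drop general hypotheses that miss x
        else:
            if S is not None and covers(S, x):
                raise ValueError("version space collapsed (inconsistent data)")
            new_G = []
            for g in G:
                if covers(g, x):                         # too general: specialize around x
                    new_G += [s for s in specializations(g, x, domains)
                              if S is None or covers(s, S)]
                else:
                    new_G.append(g)
            G = new_G
    return S, G

domains = [('sunny', 'rainy'), ('warm', 'cold')]         # invented attribute domains
examples = [(('sunny', 'warm'), True), (('rainy', 'cold'), False)]
S, G = candidate_elimination(examples, domains)
print("S =", S)    # ('sunny', 'warm')
print("G =", G)    # [('sunny', '?'), ('?', 'warm')]
```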

After learning, classification can be performed on unseen examples by testing them against the hypotheses the algorithm has retained. If the remaining hypotheses disagree on an example, a majority vote rule can be applied.[1]
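A hypothetical classification rule over such boundary sets might look like the sketch below: examples covered even by the specific boundary are positive under every consistent hypothesis, examples rejected even by every general hypothesis are negative under every consistent hypothesis, and the ambiguous region in between is settled here by a vote over the boundary hypotheses, used as a rough stand-in for a vote over the full (possibly very large) version space.

```python
# Illustrative classification using the two boundary sets. Voting over the
# boundary hypotheses approximates a majority vote over the full version
# space, which may be too large to enumerate.

def covers(h, x):
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def classify(x, S, G):
    if covers(S, x):                          # inside even the most specific hypothesis
        return True                           # every consistent hypothesis says positive
    if not any(covers(g, x) for g in G):      # outside even the most general hypotheses
        return False                          # every consistent hypothesis says negative
    votes = [covers(h, x) for h in [S, *G]]   # ambiguous region: approximate majority vote
    return sum(votes) > len(votes) / 2

S = ('sunny', 'warm')
G = [('sunny', '?'), ('?', 'warm')]
print(classify(('sunny', 'warm'), S, G))      # True: covered by the specific boundary
print(classify(('rainy', 'cold'), S, G))      # False: rejected by every general hypothesis
print(classify(('sunny', 'cold'), S, G))      # ambiguous: settled by the vote (False here)
```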

Historical background

The notion of version spaces was introduced by Mitchell in the early 1980s[2] as a framework for understanding the basic problem of supervised learning within the context of solution search. Although the basic "candidate elimination" search method that accompanies the version space framework is not a popular learning algorithm, some practical implementations have been developed (e.g., Sverdlik & Reynolds 1992, Hong & Tsang 1997, Dubois & Quafafou 2002).

A major drawback of version space learning is its inability to deal with noise: any pair of inconsistent examples can cause the version space to collapse, i.e., become empty, so that classification becomes impossible.[1]
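As a tiny illustration of this failure mode, using the same kind of toy interval hypothesis space as in the first sketch above, two contradictory labels for a single point leave no consistent hypothesis:

```python
# Two contradictory labels for the same point empty the version space.
hypotheses = [(lambda x, lo=lo, hi=hi: lo <= x <= hi)
              for lo in range(5) for hi in range(lo, 5)]
version_space = [h for h in hypotheses if h(2)]          # (x=2, positive)
version_space = [h for h in version_space if not h(2)]   # (x=2, negative) -- noisy label
print(len(version_space))                                # 0: the version space has collapsed
```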

See also

  • Formal concept analysis
  • Inductive logic programming
  • Rough set. [The rough set framework focuses on the case where ambiguity is introduced by an impoverished feature set. That is, the target concept cannot be decisively described because the available feature set fails to disambiguate objects belonging to different categories. The version space framework focuses on the (classical induction) case where the ambiguity is introduced by an impoverished data set. That is, the target concept cannot be decisively described because the available data fails to uniquely pick out a hypothesis. Naturally, both types of ambiguity can occur in the same learning problem.]
  • Rendell (1986)[3] provides an interesting discussion of the general problem of induction.
  • Mill (1843/2002) is the classic source on inductive logic.

References

  1. ^ a b c d  
  2. ^ a b Mitchell, Tom M. (1982). "Generalization as search". Artificial Intelligence 18 (2): 203–226.  
  3. ^ Rendell, Larry (1986). "A general framework for induction and a study of selective induction". Machine Learning 1 (2): 177–226.  
  • Dubois, Vincent; Quafafou, Mohamed (2002). "Concept learning with approximation: Rough version spaces". Rough Sets and Current Trends in Computing: Proceedings of the Third International Conference, RSCTC 2002. Malvern, Pennsylvania. pp. 239–246. 
  • Hong, Tzung-Pai; Tsang, Shian-Shyong (1997). "A generalized version space learning algorithm for noisy and uncertain data". IEEE Transactions on Knowledge and Data Engineering 9 (2): 336–340.
  • Mill, John Stuart (1843/2002). A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation. Honolulu, HI: University Press of the Pacific. 
  • Mitchell, Tom M. (1997). Machine Learning. Boston: McGraw-Hill. 
  • Sverdlik, W.; Reynolds, R.G. (1992). "Dynamic version spaces in machine learning". Proceedings, Fourth International Conference on Tools with Artificial Intelligence (TAI '92). Arlington, VA. pp. 308–315. 