This blog contains information and news on mining, such as exploration, oil well drilling, gold, coal, crude oil, gasoline, mining companies, mining exploration, and petroleum.
Thursday, September 11, 2008
Data Mining: Practical Machine Learning Tools and Techniques, Second Edition (Morgan Kaufmann Series in Data Management Systems)
Review: By Dr. Lee Carlson
The major virtue of this book is its emphasis on practical applications and bread-and-butter techniques for accomplishing tasks that one could expect in a business environment. That is not to say that these techniques could not be used in a scientific research setting. They certainly could be, and in fact may be even easier to implement there, given the longer time scales available in research environments for processing information.
In the business world, however, data mining has proven to be an activity that gives a substantial competitive edge, and so many businesses are seeking ever more sophisticated methods of data mining and Web mining. Data mining could easily be considered a branch of artificial intelligence (AI), given its emphasis on learning patterns and performing classification, and the learning and classification tools it uses were devised by individuals who would describe themselves as researchers in artificial intelligence. But many, and it is fair to include the authors of this book among them, do not want to view data mining as part of artificial intelligence, since the latter stirs up discussions on the origin of intelligence, autonomous robots, and conscious machines, to paraphrase a line from chapter 8 of this book. The authors make it a point to emphasize that data mining, or "machine learning," is concerned with algorithms for inferring structure from data and for validating that structure.
Along with its practical emphasis, the book includes discussions of some very interesting developments that are not usually included in books or monographs on data mining. One of these concerns current research in 'programming by demonstration.' This research is targeted at the "ordinary" computer user who does not possess any programming knowledge but wants to automate predictable tasks. The only thing required of the user is knowledge of how to do the task in the usual way. As an example, the authors briefly discuss the 'Familiar' system, which extracts information from user applications to make predictions and then generates explanations for the user about those predictions. Even more interesting is that it learns tasks specialized to each individual user, drawing on each user's unique style and interaction history. One of the most interesting and powerful claims of programming by demonstration is that it is domain-independent, a claim worth noting given the current intense interest in reasoning patterns and algorithms that can process information arising from multiple domains. In this regard a successful system would be able to learn from a user how to play chess along with, perhaps, how to compose music. Again, the ability of a machine to reason in many domains is a step towards what many in the artificial intelligence community have called a 'universal' learning machine. But the authors do not hold to this view, and in fact they open the discussion in the chapter on the Weka workbench with a statement to the effect that no single learning algorithm will work for all data mining problems. The "universal learner," they say, is an "idealistic fantasy."
Another interesting discussion included in the book is that of 'co-training,' a methodology that arises in the context of 'semi-supervised learning.' In this learning scheme the input contains both labeled and unlabeled data. Co-training relies on the classification task being describable from two different and independent perspectives, i.e. two independent sets of features. Starting from a few labeled examples, a separate model is learned for each perspective, and each model is then used on its own to label some of the unlabeled examples. Each model contributes both positive and negative examples to the pool of labeled examples, and the procedure is repeated until the unlabeled pool is empty.
This allows both models to be retrained on the growing pool of labeled examples. The authors point out some evidence indicating that if a (naive) Bayesian learner is used throughout this procedure, it outperforms a learner that builds a single model from the labeled data alone. The intuition is that the independence of the two perspectives reduces the likelihood of an incorrect labeling. References are given for readers who want to investigate this approach in more detail, along with brief discussions of its generalizations, such as co-EM, which involves probabilistic labeling of the unlabeled data in one perspective, and of how to use support vector machines in place of the naive Bayesian learner.
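To make the co-training loop concrete, here is a minimal sketch in Python. Everything in it is illustrative rather than taken from the book: the two "views" are just two synthetic features, the per-view learner is a simple nearest-mean threshold rather than a naive Bayesian learner, and confidence is measured as distance from the decision threshold. The shape of the loop, though, is the one described above: each view's model repeatedly labels its most confident unlabeled examples and adds them to the shared pool.

```python
import random
import statistics

random.seed(0)

def make_data(n):
    # Each example has two independent "views" (features) and a hidden label.
    # Class 0 is centered at -2 in both views, class 1 at +2.
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        mu = 2.0 if y else -2.0
        data.append(((random.gauss(mu, 1.0), random.gauss(mu, 1.0)), y))
    return data

def train_view(labeled, view):
    # Nearest-mean classifier on one view: the decision threshold is the
    # midpoint between the two class means (stands in for naive Bayes).
    m0 = statistics.mean(x[view] for x, y in labeled if y == 0)
    m1 = statistics.mean(x[view] for x, y in labeled if y == 1)
    return (m0 + m1) / 2.0

def predict(threshold, xv):
    return 1 if xv > threshold else 0

def co_train(labeled, unlabeled, per_round=4):
    # Alternate between the two views: each view's model labels the
    # unlabeled examples it is most confident about (farthest from its
    # threshold) and adds them to the shared labeled pool.
    labeled, unlabeled = list(labeled), list(unlabeled)
    while unlabeled:
        for view in (0, 1):
            t = train_view(labeled, view)
            unlabeled.sort(key=lambda x: abs(x[view] - t), reverse=True)
            take, unlabeled = unlabeled[:per_round], unlabeled[per_round:]
            labeled += [(x, predict(t, x[view])) for x in take]
            if not unlabeled:
                break
    return labeled

# A few labeled seed examples from each class, plus a larger unlabeled pool.
seeds = ([((random.gauss(-2, 1), random.gauss(-2, 1)), 0) for _ in range(3)] +
         [((random.gauss(2, 1), random.gauss(2, 1)), 1) for _ in range(3)])
data = make_data(200)
unlabeled = [x for x, _ in data[:150]]   # labels hidden from the learner
test = data[150:]

grown = co_train(seeds, unlabeled)
t0 = train_view(grown, 0)
correct = sum(predict(t0, x[0]) == y for x, y in test)
print(f"test accuracy: {correct / len(test):.2f}")
```

With such well-separated synthetic classes the pseudo-labels added each round are almost always correct, which is why the final model trained on the grown pool performs far better than one trained on the six seeds alone, mirroring the evidence the authors cite.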
For the practitioner, the most useful discussion in the book concerns the evaluation of the different methods for data mining. What makes one approach to data mining better than another, and is there a ranking of the different approaches? Can one in fact make judgments on the reliability or performance of data mining algorithms using solely the training or test data? A general methodology for ranking data mining algorithms by performance would be a major advance, since it would allow a classification scheme for machine learning in which one could speak of one machine being 'more intelligent' than another. Unfortunately this is difficult, and even said to be impossible by some researchers. There are results in the research literature, going by the name of 'no free lunch' theorems, which seem to indicate that one cannot distinguish machine learning algorithms based solely on the way they deal with training or test data. The authors do not discuss these results in this book, but it is certainly apparent that they are aware of the difficult issues involved in predicting the performance of data mining algorithms.
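The standard practical answer to "which method is better on this data?" is to estimate each method's accuracy on held-out data, for instance with k-fold cross-validation. The sketch below is a toy illustration of that idea only, not the book's Weka-based evaluation machinery: the data, the simple nearest-mean classifier, and the majority-class baseline are all invented for the example.

```python
import random
import statistics

random.seed(1)

# Synthetic 1-D two-class data: class 0 centered at -1, class 1 at +1.
labels = [random.randint(0, 1) for _ in range(120)]
data = [(random.gauss(2.0 * y - 1.0, 1.0), y) for y in labels]

def nearest_mean(train, test):
    # Threshold at the midpoint of the two class means, score on test fold.
    m0 = statistics.mean(x for x, y in train if y == 0)
    m1 = statistics.mean(x for x, y in train if y == 1)
    t = (m0 + m1) / 2.0
    return sum((1 if x > t else 0) == y for x, y in test) / len(test)

def majority(train, test):
    # Baseline: always predict the most common training label.
    guess = round(sum(y for _, y in train) / len(train))
    return sum(guess == y for _, y in test) / len(test)

def cross_validate(model, data, k=10):
    # Split into k folds; each fold serves once as the test set.
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        scores.append(model(train, test))
    return statistics.mean(scores)

acc_nm = cross_validate(nearest_mean, data)
acc_mj = cross_validate(majority, data)
print(f"nearest-mean: {acc_nm:.2f}, majority baseline: {acc_mj:.2f}")
```

Note what this does and does not establish: cross-validation ranks these two methods on this particular dataset, which is exactly the kind of per-problem judgment the 'no free lunch' results permit; it does not yield a dataset-independent ranking of learning algorithms.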