[comp.edu] Information on new book

morgan@sri-unix.SRI.COM (Morgan Kaufmann) (02/08/91)

Announcing a new publication from Morgan Kaufmann Publishers, Inc.


COMPUTER SYSTEMS THAT LEARN:
Classification and Prediction Methods from 
Statistics, Neural Nets, Machine Learning, and Expert Systems

by SHOLOM M. WEISS & CASIMIR A. KULIKOWSKI
   (both of Rutgers University)
                                  ISBN 1-55860-065-5

       (For bibliographic purposes, the complete table of contents
       follows this announcement, along with contact information for
       obtaining further details or copies of the book.)



This is a practical guide to learning systems and their
application.  Learning systems are computer programs that make
decisions without significant human intervention, and may in some
cases exceed the capabilities of humans.

Practical learning systems from statistical pattern recognition,
neural networks, and machine learning are presented.  The authors
examine prominent and successful methods from each area, using an
engineering approach and the practitioner's viewpoint.  Intuitive
explanations with a minimum of mathematics make the material
accessible to anyone, regardless of their experience or special
interests.

For each method, the underlying concepts are discussed, along with
its advantages, disadvantages, sample applications, and the
fundamental principles for evaluating the performance of a learning
system.  Throughout, the authors draw on their extensive experience
in building successful systems, making evaluations, drawing
conclusions, and offering advice about selecting and applying
learning systems.

Sample data is used to contrast learning systems with their
rule-based counterparts from expert systems.  The authors discuss
the potential advantages of combining empirical learning with
expert systems, and the promise of this combination as a
complementary approach to classification and prediction.
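
As a rough illustration of that contrast (not an excerpt from the
book), the short Python sketch below compares a hand-written rule of
the kind used in a rule-based expert system with a simple
nearest-neighbor classifier whose behavior comes entirely from
labeled sample data.  The features, threshold values, and training
samples are hypothetical.

    # Illustrative sketch only -- not from the book.  The two "features"
    # (temperature, white cell count), the thresholds, and the training
    # samples below are hypothetical.

    def expert_rule(temperature, white_cells):
        # Hand-written rule, as in a rule-based expert system.
        if temperature > 38.0 and white_cells > 11.0:
            return "sick"
        return "healthy"

    def nearest_neighbor(samples, labels, case):
        # 1-nearest-neighbor: label a new case by its closest training sample.
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        best = min(range(len(samples)), key=lambda i: sq_dist(samples[i], case))
        return labels[best]

    # The learned classifier's behavior is determined by the data alone.
    samples = [(36.8, 7.0), (37.1, 8.5), (38.6, 12.1), (39.2, 13.0)]
    labels = ["healthy", "healthy", "sick", "sick"]

    print(expert_rule(38.9, 12.5))                          # "sick"
    print(nearest_neighbor(samples, labels, (38.9, 12.5)))  # "sick"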



TABLE OF CONTENTS

Preface

Chapter 1  Overview of Learning Systems                                    1
       1.1 What is a Learning System?                                      1
       1.2 Motivation for Building Learning Systems                        2
       1.3 Types of Practical Empirical Learning Systems                   4
              1.3.1 Common Theme:  The Classification Model
              1.3.2 Let the Data Speak                                     10
       1.4 What's New in Learning Methods                                  11
              1.4.1 The Impact of New Technology                           12
       1.5 Outline of the Book                                             14
       1.6 Bibliographical and Historical Remarks                          15

Chapter 2  How to Estimate the True Performance of a 
              Learning System                                              17
       2.1 The Importance of Unbiased Error Rate Estimation
       2.2 What is an Error?                                               18
              2.2.1 Costs and Risks                                        20
       2.3 Apparent Error Rate Estimates                                   23
       2.4 Too Good to Be True:  Overspecialization                        24
       2.5 True Error Rate Estimation                                      26
              2.5.1 The Idealized Model for Unlimited Samples
              2.5.2 Train-and-Test Error Rate Estimation                   27
              2.5.3 Resampling Techniques                                  30
              2.5.4 Finding the Right Complexity Fit                       36
       2.6 Getting the Most Out of the Data
       2.7 Classifier Complexity and Feature Dimensionality
       2.8 What Can Go Wrong?                                              41
              2.8.1 Poor Features, Data Errors, and Mislabeled
                      Classes                                              42
              2.8.2 Unrepresentative Samples                               43
       2.9 How Close to the Truth?                                         44
       2.10 Common Mistakes in the Performance Analysis                    46
       2.11 Bibliographical and Historical Remarks                         48

Chapter 3  Statistical Pattern Recognition                                 51
       3.1 Introduction and Overview                                       51
       3.2 A Few Sample Applications                                       52
       3.3 Bayesian Classifiers                                            54
              3.3.1 Direct Application of the Bayes Rule                   57
       3.4 Linear Discriminants                                            60
              3.4.1 The Normality Assumption and Discriminant
                      Functions                                            62
              3.4.2 Logistic Regression                                    68
       3.5 Nearest Neighbor Methods                                        70
       3.6 Feature Selection                                               72
       3.7 Error Rate Analysis                                             76
       3.8 Bibliographical and Historical Remarks                          78

Chapter 4  Neural Nets                                                     81
       4.1 Introduction and Overview                                       81
       4.2 Perceptrons                                                     82
              4.2.1 Least Mean Square Learning Systems                     87
              4.2.2 How Good is a Linear Separation Network?
       4.3 Multilayer Neural Networks                                      92
              4.3.1 Back-Propagation                                       95
              4.3.2 The Practical Application of 
                      Back-Propagation                                     99
       4.4 Error Rate and Complexity Fit Estimation                        102
       4.5 Improving on Standard Back-Propagation                          108
       4.6 Bibliographical and Historical Remarks                          110

Chapter 5  Machine Learning:  Easily Understood Decision
               Rules                                                       113
       5.1 Introduction and Overview                                       113
       5.2 Decision Trees                                                  116
              5.2.1 Finding the Perfect Tree                               118
              5.2.2 The Incredible Shrinking Tree                          123
              5.2.3 Limitations of Tree Induction Methods                  130
       5.3 Rule Induction                                                  133
              5.3.1 Predictive Value Maximization                          135
       5.4 Bibliographical and Historical Remarks                          141

Chapter 6  Which Technique is Best?                                        145
       6.1 What's Important in Choosing a Classifier                       146
              6.1.1 Prediction Accuracy                                    147
              6.1.2 Speed of Learning and Classification                   165
              6.1.3 Explanation and Insight                                168
       6.2 So, How Do I Choose a Learning System?                          169
       6.3 Variations on the Standard Problem                              172
              6.3.1 Missing Data                                           172
              6.3.2 Incremental Learning                                   173
       6.4 Future Prospects for Improved Learning Methods
       6.5 Bibliographical and Historical Remarks                          175

Chapter 7  Expert Systems                                                  177
       7.1 Introduction and Overview                                       177
              7.1.1 Why Build Expert Systems?  New vs. Old 
                      Knowledge                                            179
       7.2 Estimating Error Rates for Expert Systems                       183
       7.3 Complexity of Knowledge Bases                                   185
              7.3.1 How Many Rules Are Too Many?                           185
       7.4 Knowledge Base Example                                          197
       7.5 Empirical Analysis of Knowledge Bases                           198
       7.6 Future:  Combined Learning and Expert Systems                   200
       7.7 Bibliographical and Historical Remarks                          201

References                                                                 205

Author Index                                                               215

Subject Index                                                              219








COMPUTER SYSTEMS THAT LEARN:
Classification and Prediction Methods from 
Statistics, Neural Nets, Machine Learning, and Expert Systems

by SHOLOM M. WEISS & CASIMIR A. KULIKOWSKI

ISBN 1-55860-065-5        $39.95       255 pages
Morgan Kaufmann Publishers, Inc.

_________________________________________________________________


Ordering Information:

       Shipping is available at cost, plus a nominal handling fee:
       In the U.S. and Canada, please add $3.50 for the first book
       and $2.50 for each additional book for surface shipping; for
       surface shipments to all other areas, please add $6.50 for the
       first book and $3.50 for each additional book.  Air shipment
       is available outside North America for $45.00 for the first
       book and $25.00 for each additional book.

       MasterCard, Visa, and personal checks drawn on U.S. banks
       are accepted.

       MORGAN KAUFMANN PUBLISHERS, INC.
       Department   B3
       2929 Campus Drive, Suite 260
       San Mateo, CA 94403
       USA
       
       Phone: (800) 745-7323 (in North America)
              (415) 578-9928
       Fax: (415) 578-0672
       email: morgan@unix.sri.com