How does a machine learn a new concept on the basis of examples? This second edition takes account of important new developments in the field. It also deals extensively with the theory of learning applied to control systems, which is now comparably mature to the theory of learning in neural networks.
Learning and Generalization provides a formal mathematical theory for addressing intuitive questions such as:
How does a machine learn a new concept on the basis of examples?
How can a neural network, after sufficient training, correctly predict the outcome of a previously unseen input?
How much training is required to achieve a specified level of accuracy in the prediction?
How can one identify the dynamical behaviour of a nonlinear control system by observing its input-output behaviour over a finite interval of time?
In its successful first edition, A Theory of Learning and Generalization was the first book to treat the problem of machine learning in conjunction with the theory of empirical processes, the latter being a well-established branch of probability theory. Treating the two topics side by side leads to new insights, as well as to new results in both fields.
This second edition extends and improves upon this material, covering new areas including:
Support vector machines.
Fat-shattering dimensions and applications to neural network learning.
Learning with dependent samples generated by a beta-mixing process.
Connections between system identification and learning theory.
Probabilistic solution of 'intractable problems' in robust control and matrix theory using randomized algorithms.
Solutions to some of the open problems posed in the first edition, reflecting advancements in the field.