Neural Networks and Learning Machines (English Edition, 3rd Edition)
  • Author: Simon Haykin (Canada)
  • Publisher: China Machine Press
  • ISBN: 9787111265283
  • Publication date: March 1, 2009
  • Pages: 906
  • List price: ¥69.00


    Book Details

    Summary
    Neural networks are an important branch of computational intelligence and machine learning, and have achieved great success in many fields. Among the many books on neural networks, the most influential is Simon Haykin's Neural Networks: A Comprehensive Foundation (renamed Neural Networks and Learning Machines in this third edition). Drawing on recent advances in neural networks and machine learning, the author gives a comprehensive, systematic introduction to the basic models, methods, and techniques of neural networks from both theoretical and practical perspectives, and integrates neural networks and machine learning into a coherent whole.
    The book not only emphasizes mathematical analysis and theory, but also pays close attention to applications of neural networks in practical engineering problems such as pattern recognition, signal processing, and control systems. It is highly readable: the author examines the basic models and principal learning theories of neural networks in depth yet with a light touch, and a wealth of computer experiments, worked examples, and exercises help the reader master the material.
    This edition has been extensively revised from its predecessor and provides an up-to-date treatment of these two increasingly important subjects, neural networks and machine learning.
    Key features
    On-line learning algorithms based on stochastic gradient descent; small-scale and large-scale learning problems.
    Kernel methods, including support vector machines and the representer theorem.
    Information-theoretic learning models, including copulas, independent-components analysis (ICA), coherent ICA, and the information bottleneck.
    Stochastic dynamic programming, including approximate dynamic programming and neurodynamic programming.
    Sequential state-estimation algorithms, including …
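The first of these topics, on-line learning via stochastic gradient descent, is exemplified by the least-mean-square (LMS) algorithm the book covers in Chapter 3. A minimal illustrative sketch (not the book's code; the 2-tap target system and step size here are made up for demonstration):

```python
# LMS adaptation: the canonical on-line stochastic-gradient algorithm.
# Each sample triggers one update w <- w + eta * e * x, where e = d - w.x
import random

def lms_step(w, x, d, eta):
    """One LMS update; returns the new weights and the a priori error."""
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * e * xi for wi, xi in zip(w, x)], e

random.seed(0)
true_w = [0.5, -0.3]          # hypothetical unknown system to identify
w = [0.0, 0.0]                # initial weight vector
for _ in range(2000):
    x = [random.gauss(0, 1) for _ in true_w]              # input sample
    d = sum(ti * xi for ti, xi in zip(true_w, x))         # desired response
    w, e = lms_step(w, x, d, eta=0.05)

print(w)  # approaches true_w = [0.5, -0.3]
```

Because each update uses only the current sample, LMS scales to the large-scale learning problems the feature list mentions, at the cost of gradient noise controlled by the learning-rate parameter eta.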
    Contents
    Preface
    Acknowledgements
    Abbreviations and Symbols
    GLOSSARY
    Introduction
    1 What Is a Neural Network?
    2 The Human Brain
    3 Models of a Neuron
    4 Neural Networks Viewed As Directed Graphs
    5 Feedback
    6 Network Architectures
    7 Knowledge Representation
    8 Learning Processes
    9 Learning Tasks
    10 Concluding Remarks
    Notes and References
    Chapter 1 Rosenblatt's Perceptron
    1.1 Introduction
    1.2 Perceptron
    1.3 The Perceptron Convergence Theorem
    1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
    1.5 Computer Experiment: Pattern Classification
    1.6 The Batch Perceptron Algorithm
    1.7 Summary and Discussion
    Notes and References
    Problems
    Chapter 2 Model Building through Regression
    2.1 Introduction
    2.2 Linear Regression Model: Preliminary Considerations
    2.3 Maximum a Posteriori Estimation of the Parameter Vector
    2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation
    2.5 Computer Experiment: Pattern Classification
    2.6 The Minimum-Description-Length Principle
    2.7 Finite Sample-Size Considerations
    2.8 The Instrumental-Variables Method
    2.9 Summary and Discussion
    Notes and References
    Problems
    Chapter 3 The Least-Mean-Square Algorithm
    3.1 Introduction
    3.2 Filtering Structure of the LMS Algorithm
    3.3 Unconstrained Optimization: A Review
    3.4 The Wiener Filter
    3.5 The Least-Mean-Square Algorithm
    3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
    3.7 The Langevin Equation: Characterization of Brownian Motion
    3.8 Kushner's Direct-Averaging Method
    3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter
    3.10 Computer Experiment I: Linear Prediction
    3.11 Computer Experiment II: Pattern Classification
    3.12 Virtues and Limitations of the LMS Algorithm
    3.13 Learning-Rate Annealing Schedules
    3.14 Summary and Discussion
    Notes and References
    Problems
    Chapter 4 Multilayer Perceptrons
    4.1 Introduction
    4.2 Some Preliminaries
    4.3 Batch Learning and On-Line Learning
    4.4 The Back-Propagation Algorithm
    4.5 XOR Problem
    4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better
    4.7 Computer Experiment: Pattern Classification
    4.8 Back Propagation and Differentiation
    4.9 The Hessian and Its Role in On-Line Learning
    4.10 Optimal Annealing and Adaptive Control of the Learning Rate
    4.11 Generalization
    4.12 Approximations of Functions
    4.13 Cross-Validation
    4.14 Complexity Regularization and Network Pruning
    4.15 Virtues and Limitations of Back-Propagation Learning
    4.16 Supervised Learning Viewed as an Optimization Problem
    4.17 Convolutional Networks
    4.18 Nonlinear Filtering
    4.19 Small-Scale Versus Large-Scale Learning Problems
    4.20 Summary and Discussion
    Notes and References
    Problems
    Chapter 5 Kernel Methods and Radial-Basis Function Networks
    5.1 Introduction
    5.2 Cover's Theorem on the Separability of Patterns
    5.3 The Interpolation Problem
    5.4 Radial-Basis-Function Networks
    5.5 K-Means Clustering
    5.6 Recursive Least-Squares Estimation of the Weight Vector
    5.7 Hybrid Learning Procedure for RBF Networks
    5.8 Computer Experiment: Pattern Classification
    5.9 Interpretations of the Gaussian Hidden Units
    5.10 Kernel Regression and Its Relation to RBF Networks
    5.11 Summary and Discussion
    Notes and References
    Problems
    Chapter 6 Support Vector Machines
    Chapter 7 Regularization Theory
    Chapter 8 Principal-Components Analysis
    Chapter 9 Self-Organizing Maps
    Chapter 10 Information-Theoretic Learning Models
    Chapter 11 Stochastic Methods Rooted in Statistical Mechanics
    Chapter 12 Dynamic Programming
    Chapter 13 Neurodynamics
    Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems
    Chapter 15 Dynamically Driven Recurrent Networks
    Bibliography
    Index
