“Machine learning is a multidisciplinary field that has emerged over the past two decades or so, drawing on probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. Machine learning theory is mainly concerned with designing and analyzing algorithms that let computers ‘learn’ automatically. Machine learning algorithms are a class of algorithms that automatically extract regularities from data and use those regularities to make predictions about unseen data. Because learning algorithms involve a great deal of statistical theory, machine learning is especially closely tied to statistical inference and is also known as statistical learning theory. On the algorithm-design side, machine learning theory focuses on learning algorithms that are implementable and effective. Many inference problems are intractable to solve exactly, so part of machine learning research is devoted to developing tractable approximation algorithms.” —— Chinese Wikipedia
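As a minimal illustration of that definition, learning a regularity from data and then using it to predict unseen inputs, here is a hedged sketch (assuming only NumPy; the data and the linear rule are invented for illustration):

```python
import numpy as np

# Training data: noisy samples of an unknown linear rule y = 2x + 1.
rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=100)
y_train = 2 * x_train + 1 + 0.1 * rng.standard_normal(100)

# "Learning": estimate the rule's parameters from the data alone.
A = np.stack([x_train, np.ones_like(x_train)], axis=1)
slope, intercept = np.linalg.lstsq(A, y_train, rcond=None)[0]

# "Prediction": apply the learned rule to inputs never seen in training.
x_new = np.array([0.5, -0.25])
print(slope * x_new + intercept)  # approximately [2.0, 0.5]
```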

Knowledge Collection

Machine Learning Courses (collected by Zhuanzhi)

  1. CS229: Machine Learning (Andrew Ng)
  2. NTU: Machine Learning (Hung-yi Lee)
  3. University of Edinburgh: Machine Learning and Pattern Recognition
  4. Courses on machine learning
  5. CSC2535 -- Spring 2013 Advanced Machine Learning
  6. Stanford CME 323: Distributed Algorithms and Optimization
  7. University at Buffalo CSE574: Machine Learning and Probabilistic Graphical Models Course
  8. Stanford CS229: Machine Learning Autumn 2015
  9. Stanford / Winter 2014-2015 CS229T/STATS231: Statistical Learning Theory
  10. CMU Fall 2015 10-715: Advanced Introduction to Machine Learning
  11. 2015 Machine Learning Summer School: Convex Optimization Short Course
  12. STA 4273H [Winter 2015]: Large Scale Machine Learning
  13. University of Oxford: Machine Learning: 2014-2015
  14. Computer Science 294: Practical Machine Learning [Fall 2009]
  15. Statistics, Probability and Machine Learning Short Course
  16. Statistical Learning
  17. Machine learning courses online
  18. Build Intelligent Applications: Master machine learning fundamentals in five hands-on courses
  19. Machine Learning
  20. Princeton Computer Science 598D: Overcoming Intractability in Machine Learning
  21. Princeton Computer Science 511: Theoretical Machine Learning
  22. MACHINE LEARNING FOR MUSICIANS AND ARTISTS
  23. CMSC 726: Machine Learning
  24. MIT: 9.520: Statistical Learning Theory and Applications, Fall 2015
  25. CMU: Machine Learning: 10-701/15-781, Spring 2011
  26. NLA 2015 course material
  27. CS 189/289A: Introduction to Machine Learning [with videos]
  28. An Introduction to Statistical Machine Learning Spring 2014 [for ACM Class]
  29. CS 159: Advanced Topics in Machine Learning [Spring 2016]
  30. Advanced Statistical Computing [Vanderbilt University]
  31. Stanford CS229: Machine Learning Spring 2016
  32. Machine Learning: 2015-2016
  33. CS273a: Introduction to Machine Learning
  34. Machine Learning CS-433
  35. Machine Learning Introduction: A machine learning course using Python, Jupyter Notebooks, and OpenML
  36. Advanced Introduction to Machine Learning
  37. Statistical Learning Theory and Applications [MIT]
  38. Regularization Methods for Machine Learning
  39. Convex Optimization: Spring 2015
  40. CMU: Probabilistic Graphical Models [10-708, Spring 2014]
  41. Advanced Optimization and Randomized Methods
  42. Machine Learning for Robotics and Computer Vision
  43. Statistical Machine Learning
  44. Probabilistic Graphical Models [10-708, Spring 2016]

Mathematical Foundations

Calculus

  1. Khan Academy Calculus [https://www.khanacademy.org/math/calculus-home]

Linear Algebra

  1. Khan Academy Linear Algebra
  2. MIT Linear Algebra (currently the best linear algebra course)

Statistics and Probability

  1. edx Introduction to Statistics [https://www.edx.org/course/introduction-statistics-descriptive-uc-berkeleyx-stat2-1x]
  2. edx Probability [https://www.edx.org/course/introduction-statistics-probability-uc-berkeleyx-stat2-2x]
  3. An exploration of Random Processes for Engineers [http://www.ifp.illinois.edu/~hajek/Papers/randomprocDec11.pdf]
  4. Information Theory [http://colah.github.io/posts/2015-09-Visual-Information/]

VIP Content

In recent years, random matrix theory (RMT) has moved to the forefront of learning theory as a tool for understanding some of its most important challenges. From the generalization of deep learning models to the precise analysis of optimization algorithms, RMT provides analytically tractable models.

Part 1: Introduction and Classical Random Matrix Ensembles

This part introduces two classical random matrix ensembles: the Gaussian Orthogonal Ensemble and Wishart matrices. Through numerical experiments, we introduce some of the most important distributions in random matrix theory, such as the semicircle and Marchenko-Pastur laws, as well as key concepts such as universality.
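As a hedged sketch of the kind of numerical experiment this part describes (the actual course notebooks are at the link below; this version assumes only NumPy), one can sample a GOE matrix and check its spectrum against the semicircle law:

```python
import numpy as np

def goe_eigenvalues(n, seed=None):
    """Sample the eigenvalues of an n x n GOE matrix, scaled so the
    spectrum converges to the semicircle law on [-2, 2]."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    h = (a + a.T) / np.sqrt(2 * n)  # symmetrize and normalize
    return np.linalg.eigvalsh(h)

def semicircle_density(x):
    """Wigner semicircle density on [-2, 2]."""
    return np.sqrt(np.maximum(4 - x**2, 0)) / (2 * np.pi)

if __name__ == "__main__":
    eigs = goe_eigenvalues(2000, seed=0)
    hist, edges = np.histogram(eigs, bins=50, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    # For large n, the empirical density should track the semicircle closely.
    max_err = np.abs(hist - semicircle_density(centers)).max()
    print(f"max deviation from semicircle: {max_err:.3f}")
```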

Part 2: A Primer on Random Matrix Theory: the Stieltjes and R Transforms

This part introduces some of the core proof techniques in random matrix theory: the Stieltjes and R transforms.
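For reference, the standard definitions behind these techniques (not copied from the course material) can be stated as follows; the example at the end specializes to the semicircle law from Part 1:

```latex
% Stieltjes transform of a probability measure \mu on \mathbb{R}:
G_\mu(z) = \int_{\mathbb{R}} \frac{d\mu(\lambda)}{z - \lambda},
\qquad z \in \mathbb{C} \setminus \operatorname{supp}(\mu).

% The R-transform is built from the functional inverse of G_\mu:
R_\mu(z) = G_\mu^{-1}(z) - \frac{1}{z},

% and it linearizes free additive convolution:
R_{\mu \boxplus \nu}(z) = R_\mu(z) + R_\nu(z).

% Example: for the semicircle law on [-2, 2],
G(z) = \frac{z - \sqrt{z^2 - 4}}{2}, \qquad R(z) = z.
```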

Part 3: Analysis of Numerical Algorithms

This part focuses on applications of random matrix theory to the analysis of numerical algorithms.

Part 4: Why Does Deep Learning Work?

This part discusses random matrix models of generalization in deep neural networks.
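The models themselves are in the tutorial linked below; as a hedged sketch of the spectral baseline such generalization analyses build on (assuming only NumPy; the dimensions are invented), the eigenvalues of a white-data sample covariance follow the Marchenko-Pastur law from Part 1:

```python
import numpy as np

n, p = 4000, 1000  # samples, features; aspect ratio q = p/n
q = p / n
rng = np.random.default_rng(0)

# Sample covariance of white data: eigenvalues follow Marchenko-Pastur.
X = rng.standard_normal((n, p))
eigs = np.linalg.eigvalsh(X.T @ X / n)

def marchenko_pastur(x, q):
    """Marchenko-Pastur density for aspect ratio q = p/n <= 1."""
    lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
    out = np.zeros_like(x)
    inside = (x > lo) & (x < hi)
    out[inside] = (np.sqrt((hi - x[inside]) * (x[inside] - lo))
                   / (2 * np.pi * q * x[inside]))
    return out

hist, edges = np.histogram(eigs, bins=60, density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(f"max deviation from MP density: "
      f"{np.abs(hist - marchenko_pastur(centers, q)).max():.3f}")
```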

https://random-matrix-learning.github.io/#presentation1


Latest Content

Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data. Such an ability has strong implications in a wide variety of fields whose data is inherently relational, for which conventional neural networks do not perform well. Indeed, as recent reviews can attest, research in the area of GNNs has grown rapidly and has led to the development of a variety of GNN algorithm variants as well as to the exploration of groundbreaking applications in chemistry, neurology, electronics, and communication networks, among others. At the current stage of research, however, the efficient processing of GNNs is still an open challenge for several reasons. Besides their novelty, GNNs are hard to compute due to their dependence on the input graph, their combination of dense and very sparse operations, and the need to scale to huge graphs in some applications. In this context, this paper aims to make two main contributions. On the one hand, a review of the field of GNNs is presented from the perspective of computing. This includes a brief tutorial on GNN fundamentals, an overview of the evolution of the field in the last decade, and a summary of operations carried out in the multiple phases of different GNN algorithm variants. On the other hand, an in-depth analysis of current software and hardware acceleration schemes is provided, from which a hardware-software, graph-aware, and communication-centric vision for GNN accelerators is distilled.
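The abstract's point that GNNs mix dense and very sparse operations can be made concrete with a minimal sketch (assuming NumPy and SciPy; the function and variable names are illustrative, not the paper's): one graph-convolution layer combines a sparse normalized adjacency with dense feature and weight matrices.

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj, features, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    adj      : (n, n) sparse adjacency matrix (scipy CSR)
    features : (n, d_in) dense node-feature matrix
    weights  : (d_in, d_out) dense learned weight matrix
    """
    n = adj.shape[0]
    a_hat = adj + sp.eye(n)                   # add self-loops
    deg = np.asarray(a_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt  # sparse-sparse products
    h = a_norm @ (features @ weights)         # sparse aggregation of a dense projection
    return np.maximum(h, 0.0)

# Tiny 3-node path graph with 4-dim features, projected to 2 dims.
adj = sp.csr_matrix(np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float))
rng = np.random.default_rng(0)
print(gcn_layer(adj, rng.standard_normal((3, 4)), rng.standard_normal((4, 2))))
```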

