Model complexity is a fundamental problem in deep learning. In this paper we conduct a systematic overview of the latest studies on model complexity in deep learning. The model complexity of deep learning can be categorized into expressive capacity and effective model complexity. We review the existing studies on these two categories along four important factors: model framework, model size, optimization process, and data complexity. We also discuss the applications of deep learning model complexity, including understanding model generalization capability, model optimization, and model selection and design. We conclude by proposing several interesting future directions.
This paper surveys the applications of meta-learning in areas such as image classification, natural language processing, and robotics. Unlike deep learning, meta-learning works with smaller sample datasets and further seeks to improve model generalization to achieve higher predictive accuracy. We group meta-learning models into three categories: black-box adaptation models, similarity-based methods, and models of the meta-learning process. Recent applications focus on combining meta-learning with Bayesian deep learning and reinforcement learning to provide feasible integrated solutions. We present a performance comparison of meta-learning methods and discuss directions for future research.
Title
Real-World Machine Learning, 264-page PDF
Keywords
machine learning, artificial intelligence, books and textbooks
Contents
Authors
Henrik Brink, Joseph W. Richards, Mark Fetherolf
Title: Natural Language Processing Advancements By Deep Learning: A Survey
Abstract: Natural language processing (NLP) helps intelligent machines better understand human language, enabling language-based human-machine communication. Recent developments in computational power and the emergence of large-scale language data have increased the need for data-driven approaches to automated semantic analysis. Data-driven strategies have become pervasive thanks to the remarkable progress that deep learning methods have brought to computer vision, automatic speech recognition, and especially NLP. This survey categorizes and discusses the different aspects and applications of NLP that have benefited from deep learning. It covers core NLP tasks and applications and describes how deep learning methods and models advance these areas. We further analyze and compare different approaches and state-of-the-art models.
Title: On the information bottleneck theory of deep learning
Abstract: The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior. In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases, an initial fitting phase followed by a compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase arises from the diffusion-like behavior of stochastic gradient descent. Here we show that none of these claims hold true in the general case; rather, they reflect assumptions made in computing a finite mutual information metric in deterministic networks. When the metric is computed using simple binning, we demonstrate through a combination of analytical results and simulation that the information plane trajectories observed in prior work are predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities such as tanh yield a compression phase as neural activations enter saturation, but linear activation functions and single-sided saturating nonlinearities (such as the widely used ReLU) in fact do not. Moreover, we find no evident causal connection between compression and generalization: networks that do not compress can still generalize, and vice versa. Next, we show that the compression phase, when it exists, does not arise from stochasticity in training, by demonstrating that the IB findings can be replicated using full-batch gradient descent rather than stochastic gradient descent. Finally, we show that when the input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, even though the overall information about the input may increase monotonically with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.
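To make the binning-based mutual information estimate described above concrete, the following minimal NumPy sketch (an illustrative assumption, not the paper's code) discretizes the hidden activations of a toy deterministic layer and reports the resulting finite estimate of I(X; T) for tanh versus ReLU nonlinearities. Pushing tanh into saturation with larger weight scales concentrates activations in the extreme bins and lowers the binned entropy, mimicking the apparent compression discussed above, while ReLU is unaffected by the scaling.

import numpy as np

rng = np.random.default_rng(0)

def binned_entropy(t, n_bins=30):
    # H(T) after discretizing each hidden unit into equal-width bins.
    # For a deterministic network, H(T|X) = 0, so I(X; T) equals this binned H(T).
    codes = np.stack(
        [np.digitize(t[:, j], np.linspace(t[:, j].min(), t[:, j].max() + 1e-9, n_bins + 1))
         for j in range(t.shape[1])],
        axis=1,
    )
    _, counts = np.unique(codes, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

# Toy deterministic layer (sizes are illustrative assumptions):
# 12-dimensional binary inputs mapped to 8 hidden units by a random weight matrix.
n, d, width = 4096, 12, 8
X = rng.integers(0, 2, size=(n, d)).astype(float)
W = rng.normal(size=(d, width))

for name, act in [("tanh", np.tanh), ("relu", lambda z: np.maximum(z, 0.0))]:
    for scale in (0.5, 2.0, 8.0):  # larger weight scales push tanh into its saturating regime
        T = act(scale * X @ W)
        print(f"{name:5s} scale={scale:4.1f}  I(X;T) ~= {binned_entropy(T):.2f} bits")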
《Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation》K Murray, J Kinnison, T Q. Nguyen, W Scheirer, D Chiang [University of Notre Dame] (2019)
《Deep Learning Based Detection and Correction of Cardiac MR Motion Artefacts During Reconstruction for High-Quality Segmentation》I Oksuz, J R. Clough, B Ruijsink, E P Anton, A Bustin, G Cruz, C Prieto, A P. King, J A. Schnabel [King’s College London] (2019)
Deep Learning with Python introduces the field of deep learning using the Python language and the powerful Keras library. Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples.
Deep Learning in Computer Vision: Methods, Interpretation, Causation, and Fairness
Deep learning models have succeeded at a variety of human intelligence tasks and are already being used at commercial scale. These models largely rely on standard gradient descent optimization of a function f parameterized by θ, which maps an input x to an output ŷ. The optimization procedure minimizes the loss (difference) between the model output ŷ and the actual output y. As an example, in the cancer detection setting, x is an MRI image and y is the presence or absence of cancer. Three key ingredients hint at the reason behind deep learning's power: (1) deep architectures that are adept at breaking down complex functions into a composition of simpler abstract parts; (2) standard gradient descent methods that can attain local minima on a nonconvex loss function that are close enough to the global minima; and (3) learning algorithms that can be executed on parallel computing hardware (e.g., graphics processing units), thus making the optimization viable over hundreds of millions of observations (x, y). Computer vision tasks, where the input x is a high-dimensional image or video, are particularly suited to deep learning applications. Recent advances in deep architectures (i.e., inception modules, attention networks, adversarial networks, and DeepRL) have opened up completely new applications that were previously unexplored. However, the breakneck progress to replace human tasks with deep learning comes with caveats: these deep models tend to evade interpretation, lack causal relationships between input x and output ŷ, and may inadvertently mimic not just human actions but also human biases and stereotypes. In this tutorial, we provide an intuitive explanation of deep learning methods in computer vision as well as their limitations in practice.
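As a concrete illustration of the setup above (not the tutorial's own code), the following sketch runs plain full-batch gradient descent on a single logistic unit: a function f parameterized by theta maps an input x to a predicted output y_hat, and theta is updated to reduce the binary cross-entropy loss between y_hat and the actual label y. The toy data and one-layer model are assumptions made purely for illustration; real cancer-detection models are deep convolutional networks trained on MRI images.

import numpy as np

rng = np.random.default_rng(0)

def f(theta, x):
    # Model output y_hat = sigmoid(x . theta): probability of the positive class.
    z = np.clip(x @ theta, -30.0, 30.0)  # clip logits to keep exp numerically stable
    return 1.0 / (1.0 + np.exp(-z))

def loss(theta, x, y):
    # Binary cross-entropy between the model output y_hat and the actual output y.
    y_hat = f(theta, x)
    eps = 1e-12
    return -np.mean(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))

def grad(theta, x, y):
    # Gradient of the cross-entropy loss with respect to theta.
    return x.T @ (f(theta, x) - y) / len(y)

# Toy data (an assumption for illustration): 256 flattened 8x8 "images";
# the label depends on mean intensity, standing in for presence/absence of disease.
x = rng.normal(size=(256, 64))
y = (x.mean(axis=1) > 0).astype(float)

theta = np.zeros(64)
for step in range(500):            # plain full-batch gradient descent on theta
    theta -= 0.5 * grad(theta, x, y)

print(f"final loss: {loss(theta, x, y):.4f}")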
Machine learning interpretability: Interpretability and Explainability in Machine Learning