International Conference on Machine Learning / Federated AI Meeting 2018

The International Conference on Machine Learning (ICML) is the leading academic conference in the field of machine learning. First held in 1980, it has since provided a forum for pioneering research results in artificial intelligence and machine learning.

ICML 2018

In 2018, the 35th ICML takes place for the first time in the Swedish capital, Stockholm. The first day of the event is devoted to tutorials covering different subfields of machine learning, including deep learning, imitation learning, and variational Bayesian methods. The following three days offer a total of 625 talks presenting current research results from leading scientists in AI and machine learning. The final two days consist of 67 individual workshops of the Federated AI Meeting, which examine the conference topics from different angles in one- to two-day sessions.

Federated AI Meeting

The Federated AI Meeting (FAIM), held for the first time this year, is a joint gathering of the world's leading AI conferences. It brings together more than 5,000 prominent scientists and AI experts in a single location, making it the largest scientific event on artificial intelligence. Alongside ICML, it hosts researchers from the International Joint Conference on Artificial Intelligence (IJCAI-ECAI), the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), the International Conference on Case-Based Reasoning (ICCBR), and the Annual Symposium on Combinatorial Search (SoCS).

Our Areas of Interest

We will be on site at the following events, talks, and workshops and look forward to exchanging ideas:

Tutorials

  • Imitation Learning
  • Understanding your Neighbors
  • Variational Bayes and Beyond

Main Conference

Wednesday
Session 1: Reinforcement Learning
  • Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs, Paper
  • Learning with Abandonment, Paper
  • Lipschitz Continuity in Model-based Reinforcement Learning, Paper
  • Implicit Quantile Networks for Distributional Reinforcement Learning, Paper
  • More Robust Doubly Robust Off-policy Evaluation, Paper
Session 2a: Optimization (Bayesian)
  • Stagewise Safe Bayesian Optimization with Gaussian Processes, Paper
  • BOCK: Bayesian Optimization with Cylindrical Kernels, Paper
  • BOHB: Robust and Efficient Hyperparameter Optimization at Scale, Paper
  • Bayesian Optimization of Combinatorial Structures, Paper
Session 2b: Active Learning
  • Design of Experiments for Model Discrimination Hybridising Analytical and Data-Driven Approaches, Paper
  • Selecting Representative Examples for Program Synthesis, Paper
  • On the Relationship between Data Efficiency and Error for Uncertainty Sampling, Paper
Session 3a: Reinforcement Learning
  • Programmatically Interpretable Reinforcement Learning, Paper
  • Learning by Playing – Solving Sparse Reward Tasks from Scratch, Paper
  • Automatic Goal Generation for Reinforcement Learning Agents, Paper
  • Universal Planning Networks: Learning Generalizable Representations for Visuomotor Control, Paper
Session 3b: Reinforcement Learning
  • Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning, Paper
  • Path Consistency Learning in Tsallis Entropy Regularized MDPs, Paper
  • Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems, Paper
  • Least-Squares Temporal Difference Learning for the Linear Quadratic Regulator, Paper
Thursday
Session 1: Reinforcement Learning
  • Convergent Tree Backup and Retrace with Function Approximation, Paper
  • SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation, Paper
  • Scalable Bilinear Pi Learning Using State and Action Features, Paper
  • Stochastic Variance-Reduced Policy Gradient, Paper
Session 2a: Deep Learning (Adversarial)
  • Composite Functional Gradient Learning of Generative Adversarial Models, Paper
  • Tempered Adversarial Networks, Paper
  • Improved Training of Generative Adversarial Networks Using Representative Features, Paper
  • A Two-Step Computation of the Exact GAN Wasserstein Distance, Paper
  • Is Generator Conditioning Causally Related to GAN Performance?, Paper
Session 2b: Online Learning
  • Feasible Arm Identification, Paper
  • Bandits with Delayed, Aggregated Anonymous Feedback, Paper
  • Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits, Paper
  • Thompson Sampling for Combinatorial Semi-Bandits, Paper
Session 3a: Online Learning
  • Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity, Paper
  • Practical Contextual Bandits with Regression Oracles, Paper
  • Fast Stochastic AUC Maximization with O(1/n)-Convergence Rate, Paper
  • Stochastic Proximal Algorithms for AUC Maximization, Paper
Session 3b: Online Learning
  • Let's be Honest: An Optimal No-Regret Framework for Zero-Sum Games, Paper
  • Weakly Consistent Optimal Pricing Algorithms in Repeated Posted-Price Auctions with Strategic Buyer, Paper
  • Self-Bounded Prediction Suffix Tree via Approximate String Matching, Paper
  • Spatio-temporal Bayesian On-line Changepoint Detection with Model Selection, Paper
  • Learning Localized Spatio-Temporal Models From Streaming Data, Paper
Friday
Session 1: Online Learning
  • Dynamic Regret of Strongly Adaptive Methods, Paper
  • Online Learning with Abstention, Paper
  • Multi-Fidelity Black-Box Optimization with Hierarchical Partitions, Paper
  • Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits, Paper
  • Firing Bandits: Optimizing Crowdfunding, Paper
Session 2: Online Learning
  • Online Linear Quadratic Control, Paper
  • Semiparametric Contextual Bandits, Paper
  • Minimax Concave Penalized Multi-Armed Bandit Model with High-Dimensional Covariates, Paper
  • Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors, Paper
Session 3a: Reinforcement Learning
  • Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator, Paper
  • Policy Optimization as Wasserstein Gradient Flows, Paper
  • Clipped Action Policy Gradient, Paper
  • Fourier Policy Gradients, Paper
  • Self-Imitation Learning, Paper
Session 3b: Reinforcement Learning
  • Mean Field Multi-Agent Reinforcement Learning, Paper
  • Reinforcement Learning with Function-Valued Action Spaces for Partial Differential Equation Control, Paper
  • Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents, Paper
  • The Uncertainty Bellman Equation and Exploration, Paper

Workshops

  • Automatic Machine Learning (AutoML), Website
  • Exploration in Reinforcement Learning (ERL), Website
