Journal of Machine Learning Research, Volume 18 (2017)
- Katsuhiko Ishiguro, Issei Sato, Naonori Ueda: Averaged Collapsed Variational Bayes Inference. 1:1-1:29
- Nan Du, Yingyu Liang, Maria-Florina Balcan, Manuel Gomez-Rodriguez, Hongyuan Zha, Le Song: Scalable Influence Maximization for Multiple Products in Continuous-Time Diffusion Networks. 2:1-2:45
- Pranjal Awasthi, Maria-Florina Balcan, Konstantin Voevodski: Local algorithms for interactive clustering. 3:1-3:35
- David Hallac, Christopher Wong, Steven Diamond, Abhijit Sharang, Rok Sosic, Stephen P. Boyd, Jure Leskovec: SnapVX: A Network-Based Convex Optimization Solver. 4:1-4:5
- Jason D. Lee, Qiang Liu, Yuekai Sun, Jonathan E. Taylor: Communication-efficient Sparse Regression. 5:1-5:30
- Jack Raymond, Federico Ricci-Tersenghi: Improving Variational Methods via Pairwise Linear Response Identities. 6:1-6:36
- Adam S. Charles, Dong Yin, Christopher J. Rozell: Distributed Sequence Memory of Multidimensional Inputs in Recurrent Networks. 7:1-7:37
- Henry Adams, Tegan Emerson, Michael Kirby, Rachel Neville, Chris Peterson, Patrick D. Shipman, Sofya Chepushtanova, Eric M. Hanson, Francis C. Motta, Lori Ziegelmeier: Persistence Images: A Stable Vector Representation of Persistent Homology. 8:1-8:35
- Ery Arias-Castro, Gilad Lerman, Teng Zhang: Spectral Clustering Based on Local PCA. 9:1-9:57
- Yves F. Atchadé, Gersende Fort, Eric Moulines: On Perturbed Proximal Gradient Algorithms. 10:1-10:33
- Christos Dimitrakakis, Blaine Nelson, Zuhe Zhang, Aikaterini Mitrokotsa, Benjamin I. P. Rubinstein: Differential Privacy for Bayesian Inference through Posterior Sampling. 11:1-11:39
- Dae Il Kim, Benjamin F. Swanson, Michael C. Hughes, Erik B. Sudderth: Refinery: An Open Source Topic Modeling Web Platform. 12:1-12:5
- Herbert Jaeger: Using Conceptors to Manage Neural Long-Term Memories for Temporal Patterns. 13:1-13:43
- Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, David M. Blei: Automatic Differentiation Variational Inference. 14:1-14:45
- Jacques Wainer, Gavin C. Cawley: Empirical Evaluation of Resampling Procedures for Optimising SVM Hyperparameters. 15:1-15:35
- Naoki Ito, Akiko Takeda, Kim-Chuan Toh: A Unified Formulation and Fast Accelerated Proximal Gradient Method for Classification. 16:1-16:49
- Guillaume Lemaitre, Fernando Nogueira, Christos K. Aridas: Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning. 17:1-17:5
- Yann Ollivier, Ludovic Arnold, Anne Auger, Nikolaus Hansen: Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles. 18:1-18:65
- Francis R. Bach: Breaking the Curse of Dimensionality with Convex Neural Networks. 19:1-19:53
- Si Si, Cho-Jui Hsieh, Inderjit S. Dhillon: Memory Efficient Kernel Approximation. 20:1-20:32
- Francis R. Bach: On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions. 21:1-21:38
- Animashree Anandkumar, Rong Ge, Majid Janzamin: Analyzing Tensor Power Method Dynamics in Overcomplete Regime. 22:1-22:40
- Edward Raff: JSAT: Java Statistical Analysis Tool, a Library for Machine Learning. 23:1-23:5
- Daniel Nevo, Yaacov Ritov: Identifying a Minimal Class of Models for High-dimensional Data. 24:1-24:29
- Lars Kotthoff, Chris Thornton, Holger H. Hoos, Frank Hutter, Kevin Leyton-Brown: Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA. 25:1-25:5
- Maxim Egorov, Zachary N. Sunberg, Edward Balaban, Tim Allan Wheeler, Jayesh K. Gupta, Mykel J. Kochenderfer: POMDPs.jl: A Framework for Sequential Decision Making under Uncertainty. 26:1-26:5
- François Caron, Willie Neiswanger, Frank D. Wood, Arnaud Doucet, Manuel Davy: Generalized Pólya Urn for Time-Varying Pitman-Yor Processes. 27:1-27:32
- Alexandre Bouchard-Côté, Arnaud Doucet, Andrew Roth: Particle Gibbs Split-Merge Sampling for Bayesian Inference in Mixture Models. 28:1-28:39
- Dimitris Bertsimas, Martin S. Copenhaver, Rahul Mazumder: Certifiably Optimal Low Rank Factor Analysis. 29:1-29:53
- Yaohua Hu, Chong Li, Kaiwen Meng, Jing Qin, Xiaoqi Yang: Group Sparse Optimization via lp,q Regularization. 30:1-30:52
- Ziyuan Gao, Christoph Ries, Hans Ulrich Simon, Sandra Zilles: Preference-based Teaching. 31:1-31:32
- Daniel J. McDonald, Cosma Rohilla Shalizi, Mark J. Schervish: Nonparametric Risk Bounds for Time-Series Forecasting. 32:1-32:40
- Tianlin Shi, Jun Zhu: Online Bayesian Passive-Aggressive Learning. 33:1-33:39
- Jamshid Sourati, Murat Akçakaya, Todd K. Leen, Deniz Erdogmus, Jennifer G. Dy: Asymptotic Analysis of Objectives Based on Fisher Information in Active Learning. 34:1-34:41
- Igor Melnyk, Arindam Banerjee: A Spectral Algorithm for Inference in Hidden semi-Markov Models. 35:1-35:39
- Santtu Tikka, Juha Karvanen: Simplifying Probabilistic Expressions in Causal Inference. 36:1-36:30
- Lee-Ad Gottlieb, Aryeh Kontorovich, Pinhas Nisnevitch: Nearly optimal classification for semimetrics. 37:1-37:22
- Elena Popovici: Bridging Supervised Learning and Test-Based Co-optimization. 38:1-38:39
- Eemeli Leppäaho, Muhammad Ammad-ud-din, Samuel Kaski: GFA: Exploratory Analysis of Multiple Data Sources with Group Factor Analysis. 39:1-39:5
- Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, James Hensman: GPflow: A Gaussian Process Library using TensorFlow. 40:1-40:6
- Mehrdad Farajtabar, Yichen Wang, Manuel Gomez-Rodriguez, Shuang Li, Hongyuan Zha, Le Song: COEVOLVE: A Joint Point Process Model for Information Diffusion and Network Evolution. 41:1-41:49
- Guo Yu, Jacob Bien: Learning Local Dependence In Ordered Data. 42:1-42:60
- Daniele Durante, Nabanita Mukherjee, Rebecca C. Steorts: Bayesian Learning of Dynamic Multilayer Networks. 43:1-43:29
- Samory Kpotufe, Nakul Verma: Time-Accuracy Tradeoffs in Kernel Prediction: Controlling Prediction Quality. 44:1-44:29
- Hanwen Huang: Asymptotic behavior of Support Vector Machine for spiked population model. 45:1-45:21
- Xiangyu Chang, Shaobo Lin, Ding-Xuan Zhou: Distributed Semi-supervised Learning with Kernel Ridge Regression. 46:1-46:22
- Rémi Bardenet, Arnaud Doucet, Christopher C. Holmes: On Markov chain Monte Carlo methods for tall data. 47:1-47:43
- Abraham J. Wyner, Matthew Olson, Justin Bleich, David Mease: Explaining the Success of AdaBoost and Random Forests as Interpolating Classifiers. 48:1-48:33
- Shiau Hong Lim, Yudong Chen, Huan Xu: Clustering from General Pairwise Observations with Applications to Time-varying Graphs. 49:1-49:47
- Debarghya Ghoshdastidar, Ambedkar Dukkipati: Uniform Hypergraph Partitioning: Provable Tensor Methods and Sampling Techniques. 50:1-50:41
- Yohann de Castro, Thibault Espinasse, Paul Rochet: Reconstructing Undirected Graphs from Eigenspaces. 51:1-51:24
- Ohad Shamir: An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback. 52:1-52:11
- Adel Javanmard: Perishability of Data: Dynamic Pricing under Varying-Coefficient Models. 53:1-53:31
- Mehmet Eren Ahsen, Niharika Challapalli, Mathukumalli Vidyasagar: Two New Approaches to Compressed Sensing Exhibiting Both Robust Sparse Recovery and the Grouping Effect. 54:1-54:24
- Fabian Pedregosa, Francis R. Bach, Alexandre Gramfort: On the Consistency of Ordinal Regression Methods. 55:1-55:35
- Takashi Takenouchi, Takafumi Kanamori: Statistical Inference with Unnormalized Discrete Models and Localized Homogeneous Divergences. 56:1-56:26
- Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Aapo Hyvärinen, Revant Kumar: Density Estimation in Infinite Dimensional Exponential Families. 57:1-57:59
- Matthäus Kleindessner, Ulrike von Luxburg: Lens Depth Function and k-Relative Neighborhood Graph: Versatile Tools for Ordinal Data Analysis. 58:1-58:52
- Deepayan Chakrabarti, Stanislav Funiak, Jonathan Chang, Sofus A. Macskassy: Joint Label Inference in Networks. 59:1-59:39
- Chao Gao, Zongming Ma, Anderson Y. Zhang, Harrison H. Zhou: Achieving Optimal Misclassification Proportion in Stochastic Block Models. 60:1-60:45
- Prakash Balachandran, Eric D. Kolaczyk, Weston D. Viles: On the Propagation of Low-Rate Measurement Error to Subgraph Counts in Large Networks. 61:1-61:33
- Yannis Papanikolaou, James R. Foulds, Timothy N. Rubin, Grigorios Tsoumakas: Dense Distributions from Sparse Samples: Improved Gibbs Sampling Parameter Estimators for LDA. 62:1-62:58
- Morteza Ashraphijuo, Xiaodong Wang: Fundamental Conditions for Low-CP-Rank Tensor Completion. 63:1-63:29
- An C. Tran, Jens Dietrich, Hans W. Guesgen, Stephen Marsland: Parallel Symmetric Class Expression Learning. 64:1-64:34
- Jervis Pinto, Alan Fern: Learning Partial Policies to Speedup MDP Tree Search via Reduction to I.I.D. Learning. 65:1-65:35
- Jie Chen, Haim Avron, Vikas Sindhwani: Hierarchically Compositional Kernels for Scalable Nonparametric Learning. 66:1-66:42
- Benjamin Stucky, Sara A. van de Geer: Sharp Oracle Inequalities for Square Root Regularization. 67:1-67:29
- Matthew Norton, Alexander Mafusalov, Stan Uryasev: Soft Margin Support Vector Classification as Buffered Probability Minimization. 68:1-68:43
- Ardavan Saeedi, Tejas D. Kulkarni, Vikash K. Mansinghka, Samuel J. Gershman: Variational Particle Approximations. 69:1-69:29
- Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, Perry MacNeille: A Bayesian Framework for Learning Rule Sets for Interpretable Classification. 70:1-70:37
- A. Adam Ding, Jennifer G. Dy, Yi Li, Yale Chang: A Robust-Equitable Measure for Feature Ranking and Selection. 71:1-71:46
- Samuel Gerber, Mauro Maggioni: Multiscale Strategies for Computing Optimal Transport. 72:1-72:32
- Herke van Hoof, Gerhard Neumann, Jan Peters: Non-parametric Policy Search with Limited Information Loss. 73:1-73:46
- Martin Bilodeau, Aurélien Guetsop Nangue: Tests of Mutual or Serial Independence of Random Vectors with Applications. 74:1-74:40
- Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail: Recovering PCA and Sparse PCA via Hybrid-(l1, l2) Sparse Sampling of Data Elements. 75:1-75:34
- Austin J. Brockmeier, Tingting Mu, Sophia Ananiadou, John Yannis Goulermas: Quantifying the Informativeness of Similarity Measurements. 76:1-76:61
- David Martínez Martínez, Guillem Alenyà, Tony Ribeiro, Katsumi Inoue, Carme Torras: Relational Reinforcement Learning for Planning with Exogenous Effects. 78:1-78:44
- Rajarshi Guhaniyogi, Shaan Qamar, David B. Dunson: Bayesian Tensor Regression. 79:1-79:31
- Nicolas Flammarion, Balamurugan Palaniappan, Francis R. Bach: Robust Discriminative Clustering with Sparse Regularizers. 80:1-80:50
- Weiwei Liu, Ivor W. Tsang: Making Decision Trees Feasible in Ultrahigh Feature and Label Dimensions. 81:1-81:36
- Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu, Eric P. Xing: Learning Scalable Deep Kernels with Recurrent Structure. 82:1-82:37
- Vardan Papyan, Yaniv Romano, Michael Elad: Convolutional Neural Networks Analyzed via Convolutional Sparse Coding. 83:1-83:52
- Yuchen Zhang, Lin Xiao: Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization. 84:1-84:42
- Hui Sun, Bruce A. Craig, Lingsong Zhang: Angle-based Multicategory Distance-weighted SVM. 85:1-85:21
- Ilya O. Tolstikhin, Bharath K. Sriperumbudur, Krikamol Muandet: Minimax Estimation of Kernel Mean Embeddings. 86:1-86:47
- Alexander J. Gates, Yong-Yeol Ahn: The Impact of Random Models on Clustering Similarity. 87:1-87:28
- Aurko Roy, Sebastian Pokutta: Hierarchical Clustering via Spreading Metrics. 88:1-88:35
- Frans A. Oliehoek, Matthijs T. J. Spaan, Bas Terwijn, Philipp Robbel, João V. Messias: The MADP Toolbox: An Open Source Library for Planning and Learning in (Multi-)Agent Systems. 89:1-89:5
- H. Brendan McMahan: A survey of Algorithms and Analysis for Adaptive Online Learning. 90:1-90:50
- Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan: A distributed block coordinate descent method for training l1 regularized linear classifiers. 91:1-91:35
- Shaobo Lin, Xin Guo, Ding-Xuan Zhou: Distributed Learning with Regularized Least Squares. 92:1-92:31
- Srikanth Jagabathula, Lakshminarayanan Subramanian, Ashwin Venkataraman: Identifying Unreliable and Adversarial Workers in Crowdsourced Labeling Tasks. 93:1-93:67
- Weiwei Liu, Ivor W. Tsang, Klaus-Robert Müller: An Easy-to-hard Learning Paradigm for Multiple Classes and Multiple Labels. 94:1-94:38
- Dirk Tasche: Fisher Consistency for Prior Probability Shift. 95:1-95:32
- Maximilian Schmitt, Björn W. Schuller: openXBOW - Introducing the Passau Open-Source Crossmodal Bag-of-Words Toolkit. 96:1-96:5
- Junhong Lin, Lorenzo Rosasco: Optimal Rates for Multi-pass Stochastic Gradient Methods. 97:1-97:47
- Morteza Ashraphijuo, Xiaodong Wang, Vaneet Aggarwal: Rank Determination for Low-Rank Data Completion. 98:1-98:29
- Young Woong Park, Diego Klabjan: Bayesian Network Learning via Topological Order. 99:1-99:32
- Julia Vinogradska, Bastian Bischoff, Duy Nguyen-Tuong, Jan Peters: Stability of Controllers for Gaussian Process Dynamics. 100:1-100:37
- Aymeric Dieuleveut, Nicolas Flammarion, Francis R. Bach: Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression. 101:1-101:51
- Christophe Denis, Mohamed Hebiri: Confidence Sets with Expected Sizes for Multiclass Classification. 102:1-102:28
- Sougata Chaudhuri, Ambuj Tewari: Online Learning to Rank with Top-k Feedback. 103:1-103:50
- Thang D. Bui, Josiah Yan, Richard E. Turner: A Unifying Framework for Gaussian Process Pseudo-Point Approximations using Power Expectation Propagation. 104:1-104:72
- Mengdi Wang, Ji Liu, Ethan X. Fang: Accelerating Stochastic Composition Optimization. 105:1-105:23
- Leonard Hasenclever, Stefan Webb, Thibaut Liénart, Sebastian J. Vollmer, Balaji Lakshminarayanan, Charles Blundell, Yee Whye Teh: Distributed Bayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server. 106:1-106:37
- Mohammed Rayyan Sheriff, Debasish Chatterjee: Optimal Dictionary for Least Squares Representation. 107:1-107:28
- Zuofeng Shang, Guang Cheng: Computational Limits of A Distributed Algorithm for Smoothing Spline. 108:1-108:37
- Stephen H. Bach, Matthias Broecheler, Bert Huang, Lise Getoor: Hinge-Loss Markov Random Fields and Probabilistic Soft Logic. 109:1-109:67
- Lin Lin, Jia Li: Clustering with Hidden Markov Model on Variable Blocks. 110:1-110:49
- Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Q. Phung: Approximation Vector Machines for Large-scale Online Learning. 111:1-111:55
- Hariharan Narayanan, Alexander Rakhlin: Efficient Sampling from Time-Varying Log-Concave Distributions. 112:1-112:29
- Stanislas Lauly, Yin Zheng, Alexandre Allauzen, Hugo Larochelle: Document Neural Autoregressive Distribution Estimation. 113:1-113:24
- Shannon Fenn, Pablo Moscato: Target Curricula via Selection of Minimum Feature Sets: a Case Study in Boolean Networks. 114:1-114:26
- Shun Zheng, Jialei Wang, Fen Xia, Wei Xu, Tong Zhang: A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization. 115:1-115:52
- Naman Agarwal, Brian Bullins, Elad Hazan: Second-Order Stochastic Optimization for Machine Learning in Linear Time. 116:1-116:40
- Jiahe Lin, George Michailidis: Regularized Estimation and Testing for High-Dimensional Multi-Block Vector-Autoregressive Models. 117:1-117:49
- Zheng-Chu Guo, Lei Shi, Qiang Wu: Learning Theory of Distributed Regression with Bias Corrected Regularization Kernel Network. 118:1-118:25
- Maren Mahsereci, Philipp Hennig: Probabilistic Line Searches for Stochastic Optimization. 119:1-119:59
- Ricardo Silva, Shohei Shimizu: Learning Instrumental Variables with Structural and Non-Gaussianity Assumptions. 120:1-120:49
- Mathieu Guillame-Bert, Artur Dubrawski: Classification of Time Sequences using Graphs of Temporal Constraints. 121:1-121:34
- Jason D. Lee, Qihang Lin, Tengyu Ma, Tianbao Yang: Distributed Stochastic Variance Reduced Gradient Methods by Sampling Extra Data with Replacement. 122:1-122:43
- Marco Singer, Tatyana Krivobokova, Axel Munk: Kernel Partial Least Squares for Stationary Data. 123:1-123:41
- Stanislav Minsker, Sanvesh Srivastava, Lizhen Lin, David B. Dunson: Robust and Scalable Bayes via a Median of Subset Posterior Measures. 124:1-124:40
- Fanny Yang, Sivaraman Balakrishnan, Martin J. Wainwright: Statistical and Computational Guarantees for the Baum-Welch Algorithm. 125:1-125:53
- Christophe Dupuy, Francis R. Bach: Online but Accurate Inference for Latent Variable Models with Local Gibbs Sampling. 126:1-126:45
- Valerio Perrone, Paul A. Jenkins, Dario Spanò, Yee Whye Teh: Poisson Random Fields for Dynamic Feature Models. 127:1-127:45
- Eugène Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon: Gap Safe Screening Rules for Sparsity Enforcing Penalties. 128:1-128:33
- Jihun Hamm: Minimax Filter: Learning to Preserve Privacy from Inference Attacks. 129:1-129:31
- Théo Trouillon, Christopher R. Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, Guillaume Bouchard: Knowledge Graph Completion via Complex Tensor Factorization. 130:1-130:38
- Yuting Ma, Tian Zheng: Stabilized Sparse Online Learning for Sparse Data. 131:1-131:36
- K. S. Sesh Kumar, Francis R. Bach: Active-set Methods for Submodular Minimization Problems. 132:1-132:31
- Jean-Baptiste Schiratti, Stéphanie Allassonnière, Olivier Colliot, Stanley Durrleman: A Bayesian Mixed-Effects Model to Learn Trajectories of Changes from Repeated Manifold-Valued Observations. 133:1-133:33
- Stephan Mandt, Matthew D. Hoffman, David M. Blei: Stochastic Gradient Descent as Approximate Bayesian Inference. 134:1-134:35
- Will Wei Sun, Lexin Li: STORE: Sparse Tensor Response Regression and Neuroimaging Analysis. 135:1-135:37
- Christian Wirth, Riad Akrour, Gerhard Neumann, Johannes Fürnkranz: A Survey of Preference-Based Reinforcement Learning Methods. 136:1-136:46
- Jérémie Bigot, Charles Deledalle, Delphine Féral: Generalized SURE for optimal shrinkage of singular values in low-rank matrix denoising. 137:1-137:50
- Paulo Serra, Michel Mandjes: Dimension Estimation Using Random Connection Models. 138:1-138:35
- Michael Riis Andersen, Aki Vehtari, Ole Winther, Lars Kai Hansen: Bayesian Inference for Spatio-temporal Spike-and-Slab Priors. 139:1-139:58
- Gregory Darnell, Stoyan Georgiev, Sayan Mukherjee, Barbara E. Engelhardt: Adaptive Randomized Dimension Reduction on Massive Data. 140:1-140:30
- Huishuai Zhang, Yingbin Liang, Yuejie Chi: A Nonconvex Approach for Phase Retrieval: Reshaped Wirtinger Flow and Incremental Algorithms. 141:1-141:35
- Pietro Coretto, Christian Hennig: Consistency, Breakdown Robustness, and Algorithms for Robust Improper Maximum Likelihood Clustering. 142:1-142:39
- Yining Wang, Adams Wei Yu, Aarti Singh: On Computationally Tractable Selection of Experiments in Measurement-Constrained Regression Models. 143:1-143:41
- Yaoliang Yu, Xinhua Zhang, Dale Schuurmans: Generalized Conditional Gradient for Sparse Estimation. 144:1-144:46
- Ruitong Huang, Tor Lattimore, András György, Csaba Szepesvári: Following the Leader and Fast Rates in Online Linear Prediction: Curved Constraint Sets and Other Regularities. 145:1-145:31
- Guillaume Lecué, Shahar Mendelson: Regularization and the small-ball method II: complexity dependent error rates. 146:1-146:48
- Raymond K. W. Wong, Thomas C. M. Lee: Matrix Completion with Noisy Entries and Outliers. 147:1-147:25
- Kayvan Sadeghi: Faithfulness of Probability Distributions and Graphs. 148:1-148:29
- James D. Wilson, John Palowitch, Shankar Bhamidi, Andrew B. Nobel: Community Extraction in Multilayer Networks with Heterogeneous Community Structure. 149:1-149:49
- Felix X. Yu, Aditya Bhaskara, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang: On Binary Embedding using Circulant Matrices. 150:1-150:30
- James Hensman, Nicolas Durrande, Arno Solin: Variational Fourier Features for Gaussian Processes. 151:1-151:52
- Andrew C. Heusser, Kirsten Ziman, Lucy L. W. Owen, Jeremy R. Manning: HyperTools: a Python Toolbox for Gaining Geometric Insights into High-Dimensional Data. 152:1-152:6
- Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, Jeffrey Mark Siskind: Automatic Differentiation in Machine Learning: a Survey. 153:1-153:43
- Wesley Cowan, Junya Honda, Michael N. Katehakis: Normal Bandits of Unknown Means and Variances. 154:1-154:28
- Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep Ravikumar, Ambuj Tewari: Cost-Sensitive Learning with Noisy Labels. 155:1-155:33
- Yining Wang, Aarti Singh: Provably Correct Algorithms for Matrix Column Subset Selection with Selectively Sampled Data. 156:1-156:42
- Elif Vural, Christine Guillemot: A Study of the Classification of Low-Dimensional Data with Supervised Manifold Learning. 157:1-157:55
- Valeria Vitelli, Øystein Sørensen, Marta Crispino, Arnoldo Frigessi, Elja Arjas: Probabilistic preference learning with the Mallows rank model. 158:1-158:49
- Frédéric Chazal, Brittany Fasy, Fabrizio Lecci, Bertrand Michel, Alessandro Rinaldo, Larry A. Wasserman: Robust Topological Inference: Distance To a Measure and Kernel Distance. 159:1-159:40
- Mario Lucic, Matthew Faulkner, Andreas Krause, Dan Feldman: Training Gaussian Mixture Models at Scale via Coresets. 160:1-160:25
- Vivek S. Borkar, Vikranth Reddy Dwaracherla, Neeraja Sahasrabudhe: Gradient Estimation with Simultaneous Perturbation and Compressive Sensing. 161:1-161:27
- Clint P. George, Hani Doss: Principled Selection of Hyperparameters in the Latent Dirichlet Allocation Model. 162:1-162:38
- Alan Morningstar, Roger G. Melko: Deep Learning the Ising Model Near Criticality. 163:1-163:17
- Jacob Schreiber: pomegranate: Fast and Flexible Probabilistic Modeling in Python. 164:1-164:6
- Qianxiao Li, Long Chen, Cheng Tai, Weinan E: Maximum Principle Based Algorithms for Deep Learning. 165:1-165:29
- Xiao-Tong Yuan, Ping Li, Tong Zhang: Gradient Hard Thresholding Pursuit. 166:1-166:43
- Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, Marco Pavone: Risk-Constrained Reinforcement Learning with Percentile Risk Criteria. 167:1-167:51
- Siqi Wu, Bin Yu: Local Identifiability of $\ell_1$-minimization Dictionary Learning: a Sufficient and Almost Necessary Condition. 168:1-168:56
- Fred Morstatter, Huan Liu: In Search of Coherence and Consensus: Measuring the Interpretability of Statistical Topics. 169:1-169:32
- Fabrizio Angiulli: On the Behavior of Intrinsically High-Dimensional Spaces: Distances, Direct and Reverse Nearest Neighbors, and Hubness. 170:1-170:60
- Yunwen Lei, Lei Shi, Zheng-Chu Guo: Convergence of Unregularized Online Learning Algorithms. 171:1-171:33
- Jian Du, Shaodan Ma, Yik-Chung Wu, Soummya Kar, José M. F. Moura: Convergence Analysis of Distributed Inference with Vector-Valued Gaussian Belief Propagation. 172:1-172:38
- Michael Freitag, Shahin Amiriparian, Sergey Pugachevskiy, Nicholas Cummins, Björn W. Schuller: auDeep: Unsupervised Learning of Representations from Audio with Deep Recurrent Neural Networks. 173:1-173:5
- Sarah Nogueira, Konstantinos Sechidis, Gavin Brown: On the Stability of Feature Selection Algorithms. 174:1-174:54
- Christopher Tosh, Sanjoy Dasgupta: Maximum Likelihood Estimation for Mixtures of Spherical Gaussians is NP-hard. 175:1-175:11
- Oscar Hernan Madrid Padilla, James Sharpnack, James G. Scott, Ryan J. Tibshirani: The DFS Fused Lasso: Linear-Time Denoising over General Graphs. 176:1-176:36
- Emmanuel Abbe: Community Detection and Stochastic Block Models: Recent Developments. 177:1-177:86
- Rajen Dinesh Shah, Nicolai Meinshausen: On $b$-bit Min-wise Hashing for Large-scale Regression and Classification with Sparse Data. 178:1-178:42
- Quanming Yao, James T. Kwok: Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity. 179:1-179:52
- Hiroaki Sasaki, Takafumi Kanamori, Aapo Hyvärinen, Gang Niu, Masashi Sugiyama: Mode-Seeking Clustering and Density Ridge Estimation via Direct Estimation of Density-Derivative-Ratios. 180:1-180:47
- Philipp Probst, Anne-Laure Boulesteix: To Tune or Not to Tune the Number of Trees in Random Forest. 181:1-181:18
- Heng Lian, Zengyan Fan: Divide-and-Conquer for Debiased $l_1$-norm Support Vector Machine in Ultra-high Dimensions. 182:1-182:26
- Zifan Li, Ambuj Tewari: Beyond the Hazard Rate: More Perturbation Algorithms for Adversarial Multi-armed Bandits. 183:1-183:24
- Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, Mingyi Hong: On Faster Convergence of Cyclic Block Coordinate Descent-type Methods for Strongly Convex Minimization. 184:1-184:24
- Lisha Li, Kevin G. Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar: Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization. 185:1-185:52
- Bruce E. Hajek, Yihong Wu, Jiaming Xu: Submatrix localization via message passing. 186:1-186:52
- Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations. 187:1-187:30
- John Palowitch, Shankar Bhamidi, Andrew B. Nobel: Significance-based community detection in weighted networks. 188:1-188:48
- Genki Kusano, Kenji Fukumizu, Yasuaki Hiraoka: Kernel Method for Persistence Diagrams via Kernel Embedding and Weight Factor. 189:1-189:41
- Benjamin Guedj, Bhargav Srinivasa Desikan: Pycobra: A Python Toolbox for Ensemble Learning and Visualisation. 190:1-190:5
- Simone Filice, Giuseppe Castellucci, Giovanni Da San Martino, Alessandro Moschitti, Danilo Croce, Roberto Basili: KELP: a Kernel-based Learning Platform. 191:1-191:5
- Massil Achab, Emmanuel Bacry, Stéphane Gaïffas, Iacopo Mastromatteo, Jean-François Muzy: Uncovering Causality from Multivariate Hawkes Integrated Cumulants. 192:1-192:28
- Jennifer Wortman Vaughan: Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research. 193:1-193:46
- Santtu Tikka, Juha Karvanen: Enhancing Identification of Causal Effects by Pruning. 194:1-194:23
- Aryeh Kontorovich, Sivan Sabato, Ruth Urner: Active Nearest-Neighbor Learning in Metric Spaces. 195:1-195:38
- Dimitris Bertsimas, Colin Pawlowski, Ying Daisy Zhuo: From Predictive Methods to Missing Data Imputation: An Optimization Approach. 196:1-196:39
- Nicholas Boyd, Trevor Hastie, Stephen P. Boyd, Benjamin Recht, Michael I. Jordan: Saturating Splines and Feature Selection. 197:1-197:32
- Andrei Patrascu, Ion Necoara: Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization. 198:1-198:42
- Nihar B. Shah, Martin J. Wainwright: Simple, Robust and Optimal Ranking from Pairwise Comparisons. 199:1-199:38
- David P. Helmbold, Philip M. Long: Surprising properties of dropout in deep networks. 200:1-200:28
- Boris Konev, Carsten Lutz, Ana Ozaki, Frank Wolter: Exact Learning of Lightweight Description Logic Ontologies. 201:1-201:63
- Shuhan Liang, Wenbin Lu, Rui Song, Lan Wang: Sparse Concordance-assisted Learning for Optimal Treatment Decision. 202:1-202:26
- Junwei Lu, Mladen Kolar, Han Liu: Post-Regularization Inference for Time-Varying Nonparanormal Graphical Models. 203:1-203:78
- Quan Zhang, Mingyuan Zhou: Permuted and Augmented Stick-Breaking Bayesian Multinomial Regression. 204:1-204:33
- Ali Zarezade, Abir De, Utkarsh Upadhyay, Hamid R. Rabiee, Manuel Gomez-Rodriguez: Steering Social Activity: A Stochastic Optimal Control Point Of View. 205:1-205:35
- Avik Ray, Joe Neeman, Sujay Sanghavi, Sanjay Shakkottai: The Search Problem in Mixture Models. 206:1-206:61
- Jianqing Fan, Weichen Wang, Yiqiao Zhong: An $\ell_{\infty}$ Eigenvector Perturbation Bound and Its Application. 207:1-207:42
- Jie Shen, Ping Li: A Tight Bound of Hard Thresholding. 208:1-208:42
- Davoud Ataee Tarzanagh, George Michailidis: Estimation of Graphical Models through Structured Norm Minimization. 209:1-209:48
- Christian Borgs, Jennifer T. Chayes, Henry Cohn, Nina Holden: Sparse Exchangeable Graphs and Their Limits via Graphon Processes. 210:1-210:71
- Jiyan Yang, Yin-Lam Chow, Christopher Ré, Michael W. Mahoney: Weighted SGD for $\ell_p$ Regression with Randomized Preconditioning. 211:1-211:43
- Hongzhou Lin, Julien Mairal, Zaïd Harchaoui: Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice. 212:1-212:54
- Amichai Painsky, Naftali Tishby: Gaussian Lower Bound for the Information Bottleneck Limit. 213:1-213:29
- Emmanuel Bacry, Martin Bompaire, Philip Deegan, Stéphane Gaïffas, Søren Poulsen: tick: a Python Library for Statistical Learning, with an emphasis on Hawkes Processes and Time-Dependent Models. 214:1-214:5
- Hiroyuki Kasai: SGDLibrary: A MATLAB library for stochastic optimization algorithms. 215:1-215:5
- Swapna Buccapatnam, Fang Liu, Atilla Eryilmaz, Ness B. Shroff: Reward Maximization Under Uncertainty: Leveraging Side-Observations on Networks. 216:1-216:34
- Botao Hao, Will Wei Sun, Yufeng Liu, Guang Cheng: Simultaneous Clustering and Estimation of Heterogeneous Graphical Models. 217:1-217:58
- Shusen Wang, Alex Gittens, Michael W. Mahoney: Sketched Ridge Regression: Optimization Perspective, Statistical Perspective, and Model Averaging. 218:1-218:50
- Steffen Grünewälder: Compact Convex Projections. 219:1-219:43
- Emilija Perkovic, Johannes Textor, Markus Kalisch, Marloes H. Maathuis: Complete Graphical Characterization and Construction of Adjustment Sets in Markov Equivalence Classes of Ancestral Graphs. 220:1-220:62
- Zeyuan Allen-Zhu: Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. 221:1-221:51
- Alon Gonen, Shai Shalev-Shwartz: Average Stability is Invariant to Data Preconditioning. Implications to Exp-concave Empirical Risk Minimization. 222:1-222:13
- Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford: Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification. 223:1-223:42
- Gunwoong Park, Garvesh Raskutti: Learning Quadratic Variance Function (QVF) DAG Models via OverDispersion Scoring (ODS). 224:1-224:44
- Hafiz Tiomoko Ali, Romain Couillet: Improved spectral community detection in large heterogeneous networks. 225:1-225:49
- Avanti Athreya, Donniell E. Fishkind, Minh Tang, Carey E. Priebe, Youngser Park, Joshua T. Vogelstein, Keith D. Levin, Vince Lyzinski, Yichen Qin, Daniel L. Sussman: Statistical Inference on Random Dot Product Graphs: a Survey. 226:1-226:92
- Maik Döring, László Györfi, Harro Walk: Rate of Convergence of $k$-Nearest-Neighbor Classification Rule. 227:1-227:16
- Brendan van Rooyen, Robert C. Williamson: A Theory of Learning with Corrupted Labels. 228:1-228:50
- Sivan Sabato, Tom Hess: Interactive Algorithms: Pool, Stream and Precognitive Stream. 229:1-229:39
- Virginia Smith, Simone Forte, Chenxin Ma, Martin Takác, Michael I. Jordan, Martin Jaggi: CoCoA: A General Framework for Communication-Efficient Distributed Optimization. 230:1-230:49
- Likai Chen, Wei Biao Wu: Concentration inequalities for empirical processes of linear time series. 231:1-231:46
- Bradley S. Price, Ben Sherwood: A Cluster Elastic Net for Multivariate Regression. 232:1-232:39
- Zoltán Szabó, Bharath K. Sriperumbudur: Characteristic and Universal Tensor Product Kernels. 233:1-233:29
- Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo I. Seltzer, Cynthia Rudin: Learning Certifiably Optimal Rule Lists for Categorical Data. 234:1-234:78