Artem Artemev (@aptemav)'s Twitter Profile
Artem Artemev

@aptemav

Machine Learning PhD @ImperialCollege

ID: 392650381

Link: https://github.com/awav · Joined: 17-10-2011 11:13:57

217 Tweets

206 Followers

403 Following

Arno Solin (@arnosolin)'s Twitter Profile Photo

My #ICML2020 tutorial videos on "Machine Learning with Signal Processing" are now freely available:
I: slideslive.com/38930502/machi…
II: slideslive.com/38930503/machi…
III: slideslive.com/38930504/machi…
IV: slideslive.com/38930505/machi…
Slides: arno.solin.fi

Vincent Adam (@vincentadam87)'s Twitter Profile Photo

Come and chat with the authors of our paper: Doubly sparse variational Gaussian processes! aistats2020.net/poster_940.html #AISTATS2020 @stefanos_ele Artem Artemev Nicolas Durrande James Hensman @PROWLER_IO We are more friendly than we look in the video ;)

Mark van der Wilk (@markvanderwilk)'s Twitter Profile Photo

Tomorrow 10 Dec at 11am GMT I will speak at the Bayesian Deep Learning Meetup about **Bayesian Model Selection** and how it can help architecture search. In a short 20 minutes we will discuss why we (Bayesians ∪ Deep Learners) should care, and approaches from now and the past.

Mark van der Wilk (@markvanderwilk)'s Twitter Profile Photo

Current Conjugate Gradient Gaussian Processes require manual tuning to trade off accuracy and speed. Existing guidelines can give suboptimal results, without clear warnings. Our method tunes automatically, runs fewer CG steps, and performs better: arxiv.org/abs/2102.08314 👇1/6

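A minimal sketch of the trade-off this tweet describes — not the paper's automatic tuning method, just a textbook conjugate-gradient solver on a synthetic RBF-kernel GP system, showing that a looser stopping tolerance finishes in fewer CG steps at the cost of accuracy. All names and values here are illustrative.

```python
import numpy as np

def conjugate_gradient(K, y, tol, maxiter=1000):
    """Textbook CG for K @ alpha = y; returns (solution, iterations used)."""
    alpha = np.zeros_like(y)
    r = y - K @ alpha          # initial residual
    p = r.copy()
    rs = r @ r
    for i in range(1, maxiter + 1):
        Kp = K @ p
        step = rs / (p @ Kp)
        alpha = alpha + step * p
        r = r - step * Kp
        rs_new = r @ r
        # Stop once the relative residual drops below the tolerance.
        if np.sqrt(rs_new) < tol * np.linalg.norm(y):
            return alpha, i
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha, maxiter

# Synthetic GP regression setup: RBF kernel plus observation noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
K = np.exp(-0.5 * (X - X.T) ** 2) + 0.1 * np.eye(200)

# A looser stopping tolerance ends CG earlier (cheaper, less accurate).
alpha_loose, it_loose = conjugate_gradient(K, y, tol=1e-2)
alpha_tight, it_tight = conjugate_gradient(K, y, tol=1e-10)
print(it_loose, it_tight)  # looser tolerance => no more steps than the tight one
```

Picking `tol` by hand is exactly the manual tuning the tweet says their method replaces.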
Mark van der Wilk (@markvanderwilk)'s Twitter Profile Photo

I'm looking forward to speaking tomorrow. I will share some thoughts on:
- How Gaussian processes can help deep learning
- Recent work on accurate GP inference
- What makes a method "exact", and to what extent recent methods live up to this
Link below if you want to join!

Mark van der Wilk (@markvanderwilk)'s Twitter Profile Photo

Join us to discuss Conjugate Gradient based GP approximations! We make training easier by automatically setting approximation parameters like CG tolerance using marginal likelihood bounds. Today 5pm (London) / 9am PDT. Long talk and poster available at icml.cc/virtual/2021/p….

Vincent Dutordoir (@vdutor)'s Twitter Profile Photo

We are organizing a small-scale, offline #NeurIPS2021 satellite event in Cambridge (UK) on the 8th of December. If you are interested in NeurIPS content and are in the neighborhood, this is your chance to connect with your local machine learning community neuripsmeetupcambridge.info

Mark van der Wilk (@markvanderwilk)'s Twitter Profile Photo

I am still welcoming PhD applicants for 2022 at Imperial College London. We are a growing research group, with clear goals on what new abilities we want to develop in ML and neural networks. Topics: Invariances, neural arch search, (Bayesian) model selection, Gaussian processes.

Alexander Terenin (@avt_im)'s Twitter Profile Photo

When working with a Gaussian process, have you ever wondered why Cholesky factorization failed, or a CG solve did not converge? Answer: it's because you've got redundant, overlapping data points. And that's just the starting point! On arXiv now! arxiv.org/abs/2210.07893
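A quick toy illustration of the failure mode mentioned in the tweet (my own hypothetical setup, not the paper's construction): duplicating an input point makes a noise-free RBF kernel matrix singular, so its smallest eigenvalue is numerically zero and a plain Cholesky factorization is on a knife's edge; adding jitter/noise to the diagonal restores positive definiteness.

```python
import numpy as np

X = np.array([[0.0], [1.0], [1.0]])   # the last two inputs coincide
K = np.exp(-0.5 * (X - X.T) ** 2)     # noise-free RBF kernel matrix

# Two identical rows make K singular: its smallest eigenvalue is
# (numerically) zero, so Cholesky can fail and CG can stall.
smallest = np.linalg.eigvalsh(K).min()
print(smallest)

# The usual remedy: add observation noise / jitter to the diagonal.
L = np.linalg.cholesky(K + 1e-6 * np.eye(3))
print(np.allclose(L @ L.T, K + 1e-6 * np.eye(3)))  # True
```

The paper linked above goes further than this generic jitter fix — the toy only shows why redundant, overlapping data points break the factorization in the first place.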