PCA(news) January 1 - January 19
By Andrew Mauboussin • Issue #4 • View online
PCA(news) is a weekly newsletter with the most interesting conversations on Twitter. Each issue highlights 5-10 arXiv papers along with the most notable tweets mentioning each paper.
To select the content, we start with a human-curated list of academics and other notable people in the machine learning Twitter community and then look at their outgoing Twitter engagements (accounts they follow and tweets they like, retweet or reply to). Tweets that receive the most engagement from this group are more likely to be selected for the newsletter.
Feel free to contact me on Twitter @amaub or reply to this email with any feedback or suggestions!

Rachel Thomas
I think:
1) Regulation is necessary to address most ethics/justice/human rights issues related to data & ML
2) Internal company efforts are necessary too (there will always be gaps & lags in law)

What have you seen on how to achieve 2) without undermining efforts for 1)?
Deb Raji
New year, new papers!

“Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing”
https://t.co/UcQTUanEvZ

“Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing”
https://t.co/qyGrhn3hHF
hardmaru
If you are interested in working with doodles and sketches rather than just pixel photos, check out this survey of Deep Learning for Free-Hand Sketch, by Xu et al., with 240 references to what used to be a niche area of machine learning and computer vision https://t.co/hpaFqCzcdT https://t.co/7XaFLM05hx
David Duvenaud
Training Neural SDEs: We worked out how to do scalable reverse-mode autodiff for stochastic differential equations. This lets us fit SDEs defined by neural nets with black-box adaptive higher-order solvers.
https://t.co/9Wc6d0Vjbt
With @lxuechen, @rtqichen and @wongtkleonard. https://t.co/qlUwMxezjO
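For readers who want to see the mechanics: the trick is a stochastic analogue of the adjoint method, where gradients come from solving a second SDE backwards in time, so memory cost stays constant in the number of solver steps. Here is a minimal sketch of fitting such a model, assuming the torchsde package released alongside the paper (the exact names and arguments shown are illustrative and may differ from the current release):

```python
# Minimal sketch: fitting a neural SDE with adjoint-based gradients.
# Assumes the torchsde package from the paper's repo; API is illustrative.
import torch
import torch.nn as nn
import torchsde

class NeuralSDE(nn.Module):
    noise_type = "diagonal"  # independent noise per state dimension
    sde_type = "ito"

    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.drift_net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.diffusion_net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def f(self, t, y):  # drift term, parameterized by a neural net
        return self.drift_net(y)

    def g(self, t, y):  # diagonal diffusion term
        return self.diffusion_net(y)

sde = NeuralSDE()
y0 = torch.zeros(16, 2)           # batch of initial states
ts = torch.linspace(0., 1., 20)   # observation times
# sdeint_adjoint backpropagates by solving an adjoint SDE in reverse,
# so memory use does not grow with the number of solver steps.
ys = torchsde.sdeint_adjoint(sde, y0, ts, method="milstein")
loss = ((ys[-1] - torch.ones_like(y0)) ** 2).mean()
loss.backward()                   # reverse-mode autodiff through the solve
```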
DeepMind
How can we predict and control the collective behaviour of artificial agents? Classical game theory isn't much help when there are >2 agents. In our @iclr_conf paper, we find markets impose useful structure on interactions between gradient-based learners: https://t.co/QAmWi2MDCF https://t.co/IeLMcb9f2z
Privacy Matters
Paper: #DarkPatterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence https://t.co/df6q0RuZg2 <this should be required reading for EU DPAs
Microsoft Research
How does deep learning perform DEEP learning? Microsoft and CMU researchers establish a principle called "backward feature correction" and explain how very deep neural networks can actually perform DEEP hierarchical learning efficiently: https://t.co/9EtkaThXAT @ZeyuanAllenZhu
Ilya Sutskever
https://t.co/44pZXB4ejS
Cool theory paper presenting a problem that:
- can be efficiently learned by SGD with a DenseNet with x^2 nonlin,
- cannot be efficiently learned by any kernel method, including NTK.
Sebastien Bubeck
Exciting start of the year for theory of #DeepLearning! SGD on neural nets can:
1) simulate any other learning alg w. some poly-time init [Abbe & Sandon https://t.co/cjQ21TAkzh]
2) learn efficiently hierarchical concept classes [@ZeyuanAllenZhu & Y. Li https://t.co/TheMl9zYeR]
Sebastian Risi
Happy our new paper "Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples" w/ @chelmyers, @efreed52, @larispardo, @anushayfurqan, and @jichenz is now on arXiv: https://t.co/ETkpNKjTAv @ITUkbh @DrexelUniv https://t.co/KhKHhTx13N
David Pfahler
A ResNet variant called RevNet has a constant memory footprint independent of depth. Can this actually be true? Seems like a big deal. https://t.co/aDWPuB1vFE
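It is true, and the reason is that RevNet's residual blocks are invertible: each block's inputs can be recomputed exactly from its outputs during the backward pass, so activations never need to be stored. A minimal sketch of one reversible block (illustrative, not the paper's implementation):

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Reversible residual block: y1 = x1 + F(x2), y2 = x2 + G(y1).
    Inputs are exactly recoverable from outputs, so intermediate
    activations need not be kept around for backprop."""
    def __init__(self, channels):
        super().__init__()
        self.F = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(), nn.Linear(channels, channels))
        self.G = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(), nn.Linear(channels, channels))

    def forward(self, x1, x2):
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return x1, x2

block = ReversibleBlock(8)
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
y1, y2 = block(x1, x2)
r1, r2 = block.inverse(y1, y2)  # reconstruct the inputs from the outputs
assert torch.allclose(r1, x1, atol=1e-5) and torch.allclose(r2, x2, atol=1e-5)
```

The trade-off is compute: each block is effectively re-run once during backprop to reconstruct its inputs, in exchange for activation memory that stays O(1) in depth.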
Thomas Lahore
What an elegant idea:

Choosing the Sample with Lowest Loss makes SGD Robust

"in each step, first choose a set of k samples, then from these choose the one with the smallest current loss, and do an SGD-like update with this chosen sample"

https://t.co/mwZjnhJy92
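The rule quoted above fits in a few lines. A minimal PyTorch sketch (a hypothetical helper, not the authors' code):

```python
import torch

def min_k_loss_sgd_step(model, loss_fn, xs, ys, opt, k=8):
    """One step of the quoted rule: draw k candidate samples, keep the
    one with the smallest current loss, and update on it alone.
    Outliers (e.g. corrupted labels) tend to have large loss and are
    rarely picked, which is what makes the update robust."""
    idx = torch.randint(len(xs), (k,))   # k random candidate samples
    with torch.no_grad():                # score them at the current weights
        losses = torch.stack(
            [loss_fn(model(xs[i:i+1]), ys[i:i+1]) for i in idx])
    best = int(idx[losses.argmin()])     # lowest-loss candidate
    opt.zero_grad()
    loss = loss_fn(model(xs[best:best+1]), ys[best:best+1])
    loss.backward()                      # SGD-like update on that one sample
    opt.step()
    return loss.item()

# Toy usage: one robust step of regression on random data.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
xs, ys = torch.randn(256, 10), torch.randn(256, 1)
min_k_loss_sgd_step(model, torch.nn.functional.mse_loss, xs, ys, opt)
```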
Sanja Fidler
Current image datasets are getting very large and pre-training on them is time consuming. We are releasing Neural Data Server (NDS), a search engine for transfer learning data! @yanxi0830 @davidjesusacu #UofT
Webservice: https://t.co/3vg5uRlk2U
Paper: https://t.co/OEFWJKHETH https://t.co/i5od82gekd
Andrew Mauboussin

PCA(news) highlights the most interesting machine learning conversations on Twitter each week.

If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue