Andreas Kirsch 🇺🇦

10.2K posts
Andreas Kirsch 🇺🇦
@BlackHC
My opinions only here. 👨‍🔬 RS, 1y 🧑‍🎓 DPhil 4.5y 🧙‍♂️ RE DeepMind 1y 📺 SWE 3y 🎓 TUM 👤
Oxford, England · blackhc.net · Joined August 2009

Andreas Kirsch 🇺🇦’s posts

Everyone's arguing about whether current AI models could be conscious or not, as if it was a scientific discussion, yet I don't even know what consciousness is 🥺
Biggest regret: not spending more time getting the basics right at the beginning of my PhD. I went full-time on research projects right away, and now, three years later, I'm still playing catch-up on things I should have focused on from the start
I'm incredibly excited about all the amazing progress in ML lately 🤯 but part of me really wished I had picked a different field because I have no idea how to keep up anymore or know what to focus on 🥺😇
How does one keep up with papers in ML while still finding time for foundational studies? Not even talking about doing active research. Feeling overwhelmed every day and more and more like an imposter 🙈
A new paper review by me! I'm reviewing the fascinating "Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding", which introduces a novel method for active data selection in large-scale visual pretraining. 📉🤖 1/10
My experience with using pandas to operate on dataframes is usually: 1. read docs 2. spend an hour trying to get something to work 3. give up 4. write the equivalent Python code in 10 min 5. move on with life
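To make step 4 concrete, a toy, made-up example of the kind of thing I mean (in a case this simple pandas is obviously fine; the pain only shows up in messier cases):

```python
import pandas as pd

rows = [{"user": "a", "spend": 3}, {"user": "b", "spend": 5}, {"user": "a", "spend": 2}]

# Steps 1-3: the pandas route.
df = pd.DataFrame(rows)
totals_pd = df.groupby("user")["spend"].sum().to_dict()

# Step 4: the plain-Python equivalent.
totals_py = {}
for row in rows:
    totals_py[row["user"]] = totals_py.get(row["user"], 0) + row["spend"]

assert totals_pd == totals_py  # {'a': 5, 'b': 5}
```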
Interesting take: I believe that arxiv is closer to how science and research originally worked and "official" peer reviews haven't worked that well (at least recently)
Quote
@emilymbender.bsky.social
@emilymbender
Replying to @tdietterich @TaliaRinger and 3 others
arXiv is a cancer that promotes the dissemination of junk "science" in a format that is indistinguishable from real publications. And promotes the hectic "can't keep up" + "anything older than 6 months is irrelevant" CS culture. >>
Have you wondered why I've posted all these nice plots and animations? 🤔 Well, the slides for my lectures on (Bayesian) Active Learning, Information Theory, and Uncertainty are online now! They cover quite a bit from basic information theory to some recent papers 🥳
- Roast me
GPT-4V: No I can't
- Yes you can.
GPT-4V: Okay. Roast: "Ah, the classic 'I woke up like this' hairdo combined with an AI-themed t-shirt. You're really out here living the tech bachelor dream. Remember, even though you've got machine learning on your shirt, it …
Quote
pranav ⠕
@_pranavnt
gpt-4V is brutal LMAO
It's been six months since I submitted my thesis and I still start feeling suicidal every single time I think about my PhD experience, esp the last year of it 😐 Thank God it's over, and I hope I'll reflect on it less in the future
Very interesting ICLR **tiny** paper: openreview.net/forum?id=vHOO1 It computes a loss for all possible subsets of the dataset at the same time; this has a very elegant solution: softplus of the per-sample negative log likelihood, which essentially drops outliers 🤯
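Rough sketch of how I read the outlier-dropping behaviour (my own reconstruction of a subset-averaged likelihood; the exact sign/offset conventions below are my assumptions, not taken from the paper):

```python
import numpy as np

# Averaging the likelihood over all 2^N subsets (each sample in or out with prob 1/2)
# factorizes per sample: mean_S prod_{i in S} p_i = prod_i (1 + p_i) / 2.
# Taking -log gives a per-sample loss log 2 - log(1 + p_i) = log 2 - softplus(-nll_i),
# which saturates for badly-fit samples, so outliers stop contributing gradient.

def per_sample_loss(nll):
    return np.log(2.0) - np.logaddexp(0.0, -nll)  # log 2 - softplus(-nll)

nll = np.array([0.1, 1.0, 5.0, 50.0])       # last one is an "outlier"
print(per_sample_loss(nll))                  # saturates near log 2 ≈ 0.693 for large nll
print(np.exp(-nll) / (1 + np.exp(-nll)))     # d loss / d nll = sigmoid(-nll) → 0 for outliers
```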
🎉 New blog post on a better (visual!) intuition for information-theoretic quantities (e.g. entropy and mutual information) 🎉 🔥 Lots of visualisations 🔥 👉 Based on Yeung's "A new outlook on Shannon's information measures" from 1991 📖 #oldiebutgoldie
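For reference, the standard identities the visualisations are built around (textbook facts, nothing specific to the post; the last line is the signed "triple overlap" from Yeung's I-measure view, and it can be negative):

```latex
\begin{align}
H(X, Y) &= H(X) + H(Y) - I(X; Y) \\
I(X; Y) &= H(X) - H(X \mid Y) = H(Y) - H(Y \mid X) \\
I(X; Y; Z) &= I(X; Y) - I(X; Y \mid Z)
\end{align}
```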
Is it common or specific to ML that researchers try to add more maths to their papers and complexify their contributions to get through reviews? It is very frustrating to have to parse complexity to find nuggets of simplicity that might not warrant a paper 🙄
Why are people excited about this paper ("Neural Networks are Decision Trees", arxiv.org/abs/2210.05189)? TL;DR: The result is obvious and useless by itself. Slightly longer "hot" take below 1/4
Quote
Yannic Kilcher 🇸🇨
@ykilcher
Neural Networks are Decision Trees! Could this finally open up the black box of deep NNs? Find out in this video (w/ @Alex_Mattick ): youtu.be/_okxGdHM5b8
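To spell out why I call the result obvious: for a ReLU net, the on/off pattern of the hidden units already plays the role of a decision-tree path, and each pattern ("leaf") is just a linear model. A toy sketch of that correspondence (my own illustration, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)  # 3 hidden ReLUs on 2D inputs
w2, b2 = rng.normal(size=3), float(rng.normal())

def net(x):
    # Ordinary one-hidden-layer ReLU network.
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def tree_view(x):
    pattern = (W1 @ x + b1 > 0).astype(float)  # "branching": which ReLUs fire
    w_leaf = (w2 * pattern) @ W1               # "leaf": the linear model selected by that path
    b_leaf = (w2 * pattern) @ b1 + b2
    return w_leaf @ x + b_leaf

x = rng.normal(size=2)
print(net(x), tree_view(x))  # agree up to floating-point error
```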
My Ph.D. thesis (mostly on active learning and information-theoretic intuitions and approaches related to it) is finally on arXiv 🥳 I'm looking forward to finding and fixing many more typos in the future 😂
Quote
Information Theory Papers
@Encoding
Automated
Advancing Deep Active Learning & Data Subset Selection: Unifying Principles with Information-Theory Intuitions. arxiv.org/abs/2401.04305