
Artyom's links – January 24, 2020

By Artyom Kazak • Issue #2

AI Risk for Computer Scientists is a workshop run by MIRI every 1–2 months. If you are familiar with the rationalist sphere, work as a programmer/mathematician or study something related, and are interested in AI safety (even if skeptical), this is a great opportunity to go to Berkeley – all expenses paid! – and spend four days around awesome people while being force-fed a mix of rationality techniques and AI safety research.
If you want to go but wonder whether you are a good fit: I have no clue. MIRI people have some kind of sophisticated agenda behind this, so it’s probably better to apply anyway and let them decide.
In Praise of Fake Frameworks tries to briefly describe a very important Kuhnian idea. Specifically: if you really want to discover truths, it is not enough to just always adopt the most correct model you know. Instead you should adopt several models – even ones you know are wrong – and follow their suggestions.
Why? Each model – e.g. {Kegan’s stages, personality disorders, left/right hemisphere thinking, introversion and extroversion, maybe even the four elements} – will lead to different observations and different hypotheses. And you desperately need more observations, because you don’t have enough. Here is a Hacker News quote to illustrate the mindset I’m talking about:
Since our time is so limited, I think it is a good heuristic to avoid any non-fiction book that is known to contain errors
If you want to get even remotely close to the cutting edge of anything, this is probably the worst piece of advice you can get.
How Doomed are Large Organizations? is the latest post in Zvi’s Immoral Mazes series. It makes a pretty convincing case for not trying to build large organizations – I am actually half-convinced (which is surprising since “I want my own Google” used to be my explicit goal). It also gives a nicely understandable model for why large organizations and whole civilizations eventually die – because the maze-ification is inevitable due to <reasons> and also nearly irreversible.
Right Hemisphere Neglect (my own post) talks about the various ways I fail to use the right half of my brain. I’m slightly kidding, but mostly not. And it’s sad.
I wrote the post after getting through the first 15% of Iain McGilchrist’s The Master and His Emissary (available on LibGen), where he talks about the modes of thinking employed by the left and right hemispheres.
Right hemisphere – {understanding people’s gestures and facial expressions; feeling the need to belong; listening to your intuition without feeling guilty and/or inventing rationalizations} – fuck all that. Left hemisphere – {taking everything literally; being scared of unfamiliar things; biting philosophical bullets} – give me more. Eek.
And now for something completely different: Why Academics Stink at Writing by Steven Pinker. It might seem to be about writing – just like The Life-Changing Magic of Tidying Up is about tidying up – but I think there is a more important lesson there: if you pay close attention to why you are compelled to write badly / make a mess / etc, and overcome yourself, you will achieve something that is very hard to achieve otherwise. Tweet at me if you get what I’m talking about and have more examples.