
Matt's Thoughts In Between - Issue #68

June 11 · Issue #68
This week: YouTube and radicalisation; the danger of AI in a world of low attention; bringing the VC house down; and more…

Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
YouTube, regulation and radicalisation
The NYT published an excellent (and beautifully presented) piece on the role that YouTube may play in introducing people to far right ideas. Kevin Roose looks at one young man who moved from liberal to alt-right ideas and back again, apparently largely driven by YouTube (see here for interesting additional sociological commentary). Roose looks at the full 12,000-video viewing history that marked this journey. Do read the whole thing.
The most interesting part of the story is the sheer power of YouTube’s “Watch Next” algorithm. Apparently, these recommendations now drive 70% of total watch time. Any suggestion that YouTube deliberately promotes far right (or left) ideas is silly - but the article illustrates the impossibility of a platform of YouTube’s scale making “neutral” engineering decisions. Even an apparently innocuous choice - such as whether to optimise the algorithm for clicks or for watch time - has important real-world consequences.
In such a world it’s easy to see why the Internet giants may prefer regulation to having to make difficult - and highly political - decisions themselves. There are no obvious quick wins. The problem doesn’t go away if YouTube switches its business model or if we break up Google. We’ve accidentally created a world that can’t avoid “Sorting by Controversial” - still the best piece for understanding the untameable monster internet platforms have unleashed. I doubt it’s reversible - so it’s time to think hard about how our institutions have to evolve to contain it. 
Paying attention in an era of AI
I talked a few weeks ago about OpenAI’s new natural language generation model, GPT2 (the model dubbed “too dangerous to release”). This week my colleague Ben shared this interesting short essay on GPT2 and its implications by Sarah Constantin. Constantin notes that GPT2’s output is just good enough that it can be mistaken for a human’s if you’re not concentrating - but it breaks down pretty quickly when subjected to scrutiny.
This matters for two reasons. First, it points to a useful framework for thinking about the current state of machine learning: our models are now good at what would be “effortless” pattern recognition for humans, but not yet great at anything “effortful” (Constantin has another excellent piece on this distinction).
Second, very often today we are not, in fact, concentrating. This is arguably what makes GPT2 dangerous: it can produce content that might not stand up to scrutiny, but so cheaply and scalably that it accelerates the spread of misinformation (I’m reminded of Benedict Evans’ analogy of machine learning as being like having a million interns). Constantin is more optimistic than I am that living in such a world will improve our powers of concentration. But what if the opposite is true? What if, as Venkatesh Rao suggests, our powers are degrading just as automation makes them more important?
Will Softbank's Vision Fund bring the VC house down?
I’ve talked before about the SoftBank Vision Fund (VF) and its extraordinary power in the world of venture capital. This week the FT reported on an unusual financial manoeuvre from the VF: it will borrow $4bn against its portfolio in order to be able to return cash to investors. It’s unlikely that this represents a return of principal capital. The VF’s unusual structure means it owes over $3bn of debt coupon payments each year.
This is not a bullish sign, and comes against a backdrop of news that SoftBank is struggling to raise VF II (though not for the geopolitical reasons - backlash against Saudi Arabia - that once seemed possible). This matters because, as one analyst has put it, if the VF fails, it may bring the venture capital house down. So many of the brightest stars in startup land are currently sitting on valuations set by the VF - and yet its portfolio is an increasingly mature “basket of negative cash flows”.
The challenge is that the VF’s existence has allowed high-potential private companies to avoid price discovery for a long time, which can lead to a painful reckoning when the moment comes, as Uber is discovering. The argument from some in Silicon Valley is that staying private is a good thing, as going public too soon disincentivises innovation and long-term thinking. It will be interesting to see if new institutions like the Long Term Stock Exchange change that (though it has its critics). Whatever happens to the VF, alternative sources of late-stage finance - and financial discipline - are certainly welcome.
Quick Links
  1. IoT dystopia - airbag edition. Don’t plug cheap USB adapters into your car
  2. I can see clearly now. Which cities have the highest number of “pleasant days”?
  3. In a nutshell. What one quote best summarises your philosophy? Strong Twitter thread.
  4. AI summer. What is the carbon footprint of training an ML model?
  5. Linguistic diversity. The wonderful nuance in words for “debate” in Hungarian (Better than it sounds!)
Your feedback
Thanks for reading Thoughts in Between. I’d love it if you’d forward this to someone who might enjoy it - it’s the easiest way to help grow the community. Feel free to hit Reply if you have any comments - or talk to me on Twitter.
Until next week,
Matt
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.