
🏢 🚙 🤖 Issue #44: AI doesn't understand us, mobility -> fintech, my notes on media+future of work

Welcome to my newsletter, where I discuss thoughts and news on the intersection of the built world and technology #retail #mobility #realestate #tech
Please contribute to the community by forwarding this to someone who might enjoy it also. 🙏🏼
You can follow me on Twitter here

You know what I'm saying? Cause AI doesn't
Giong sgihltly off pstie tihs mtnoh - you've probably seen this experiment before, where the first and last letters of each word stay in place and the middle is scrambled. It has been around since at least 1976 and shows that humans interpret jumbled text with ease, for a number of reasons: humans have perception, common sense and intuition, strange faculties built on environmental and implicit learning.
Unfortunately, much technologically derived communication is predominantly text based, whether it's code in a compiler, search terms, a scraper, or the acquisition of encyclopedic knowledge from the web. Human communication, on the other hand (anthroposemiotics, for the budding etymologists), is more multivariate: we communicate not only through text but also through voice, visuals and nonverbal cues, and not just one-to-one but across multiple people and groups. This rich communication and coordination is the primary reason humans sit at the top of the food chain: we can coordinate in groups and share and triangulate knowledge and strategy in a way other primates cannot. Enough about monkeys.
Unfortunately for our micro-chipped friends, computers, this rich tapestry of communication and understanding is currently unavailable. Whilst in the last few years we've made incredible leaps in machine learning within bounded problems (finite tasks, finite results), we've yet to see machine learning truly applied to unbounded ones.
One of the key difficulties is that most machine learning algorithms are trained within narrow problem spaces.
Machine learning today is very good at understanding defined taxonomies, but struggles with input which has yet to be defined, e.g.:
“what colour is the sky not?”
Whilst you and I would understand this to be every colour other than blue (or grey, if you live in the UK), a computer might struggle to answer, as no one has strictly defined which colours the sky isn't (perhaps unsurprisingly).
A lot of this common sense derives from a broad understanding of subjects, but also from humans' ability to absorb implicit knowledge: knowledge gained incidentally and without awareness. This is a depth of understanding which isn't merely superficial and explicit; Kahneman calls it System 1 thinking.
Currently, machines only partially extract knowledge from text; in other words, there is a superficiality and explicitness to their understanding of text-based communication. A computer may only be aware of the sky being blue.
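To make the closed-world limitation concrete, here's a toy sketch (hypothetical names and facts, not any real system): a knowledge base that has only been told "the sky is blue" can answer the direct question, but the negated question only becomes answerable once someone has strictly enumerated the colour taxonomy to negate over.

```python
# Toy closed-world knowledge base (hypothetical example, not a real system).
# Explicit knowledge: the single fact the machine has been given.
FACTS = {"sky": "blue"}

# A strictly defined taxonomy of colours. Without enumerating this set,
# "what colour is the sky not?" has no computable answer.
COLOURS = {"red", "orange", "yellow", "green", "blue", "purple", "grey"}

def colour_of(entity):
    """Answer 'what colour is X?' from explicit facts only."""
    return FACTS.get(entity)

def colours_not(entity):
    """Answer 'what colour is X not?' by complementing over the taxonomy."""
    known = FACTS.get(entity)
    if known is None:
        return None  # no explicit fact, so nothing to negate against
    return COLOURS - {known}

print(colour_of("sky"))            # blue
print(sorted(colours_not("sky")))  # every defined colour except blue
```

The point of the sketch is that the "hard" question is only hard because the complement set was never defined; implicit human knowledge supplies that set for free, whereas the machine needs it spelled out.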
Yejin Choi, associate professor at the University of Washington, is building a wide corpus of "common sense" knowledge for machines. Her team is looking to build a model which understands implicit knowledge from text and plugs the gap between representation and knowledge: the difference between explicit and implicit knowledge. Her paper Verb Physics is an attempt at inferring physical knowledge of actions and objects along five dimensions, e.g. "Tyler entered his house" implies that his house is bigger than Tyler.
The ultimate goal is a broad benchmark dataset which multiple learning systems can pull from in order to plug implicit, System 1 machine knowledge gaps. Attempts such as this target the quadrant of problems that are simple for humans but hard for computers ("simple/hard"), which today partly limits the breadth of AI applications. Whilst the threat of artificial general intelligence might seem imminent, certainly amongst pessimistic techno-fantasists, the reality is that these simple/hard problems still need to be codified, with most experts continually pushing out the timeframe to AGI until such problems are solved.
(Other) News
Mary Meeker's 2019 Internet Trends Report - the annual report on the shifting sands of technology. I made some notes on the areas I'm interested in, like media, on-demand and new work modes:
  • Images are still the most popular basis of rich-media consumption // annual new photos taken increasing to > 1.3tr per annum // Instagram MAUs > 1bn, >50% of Twitter impressions are rich-media
  • Storytelling is shifting to new formats such as image, video and increasingly AR // Instagram + Snap Lens democratising image and AR creativity and expression
  • Negative content online is exacerbated by concentration of algo-driven news outlets // 76% of the US regularly use FB, YouTube, TWTR for news // self-moderation and govt regulation becoming prevalent // as social growth decelerates to ~1% YoY, new social platforms emerge…
  • Gaming actives accelerating to 2.4bn users, gaming increasingly looking like social: Fortnite, Discord and Twitch all growing +200% YoY // 44% of Fortnite users have made a friend online
  • Voice: Podcast MAUs @ 70m // Amazon Echo install base accelerated from '17 to '18 // Whilst media creation + consumption is up and to the right, people are increasingly trying to curb consumption, w/ 63% of adults trying to limit usage
New Work Modes
  • On-demand consumers increased +200% in 2 years, from 25m to 56m // driven primarily by online marketplaces and transportation // on-demand platform workers rising @ 22% CAGR
  • Total % of purely remote workers up to 5% in 2018 // Slack, Google Sheets and Airtable top apps enabling online remote collaboration // Remote workers' top motivations: 47% want flexible work hours, 30% want the ability to travel
Tech / Venture
Training deep learning models uses an insane amount of power // NYC beats SF as the top tech city, with London third 🏴󠁧󠁢󠁥󠁮󠁧󠁿; "all I'm hearing is bla bla bla San Francisco sucks" // the only account you need to read on why Jony Ive left Apple // World's least woke tech bro (Zuck) rethinks deepfakes; the debate is whether deepfakes are a new type of media altogether or a deliberate attempt to mislead // State of AI: awesome overview of key trends and developments from the last year
Plus ça change, plus c'est la même chose - Kapten and Bolt enter the ride-sharing fray in London. Great for consumers, terrible for subsidy-driven business models and investors // Ride-sharing -> fintech, a transition we've seen in Asia with Grab and Go-Jek, where much of banking is done through mobile wallets, now coming to Uber and Grow // Uber now shows Lime scooters in its app as it doubles down on going multi-modal // Ride Report partners with Bird and Lime to provide cities with data // a dynamic map of micro-mobility players in Europe by Augustin Friedel!
Thanks for reading!
Sam Cash // Physical World Technologies Newsletter

The intersection of the physical world and technology; with a focus on future mobility, real estate, retail and cities.

In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.