🛰 Software Engineering Licenses, AI Chips, Negative Power Prices – IOP Observatory #21



April 11 · Issue #21
The VUCA Observatory
Glad you could make it.
We’re a bit late today, so let’s get right to it: security is complex, so do we need to up the requirements for programmers? That’s the counter-argument to the “everybody needs to code” rallying cry, and something worth pondering.
Further, Moore’s law seems to be coming to an end, with AI as the driving force behind alternative chip designs. And California is surprised that when you build out lots of renewables, you might end up with negative electricity prices.
Onto the Observatory.
(And don’t forget: if you like this, please tell your friends and coworkers.)

Let’s Talk Security (again)
I guess one of the last things you want to hear on a Friday night is the sound of civil defence sirens. If you’re a resident of Dallas, you weren’t so lucky. There, the tornado sirens were activated and blared for just under two hours. Initially attributed to a hacked computer network, the story gets even worse: the emergency system used unencrypted radio signals, and what we have to assume were pranksters only had to broadcast a corresponding signal for the sirens to go off.
Keep this story in mind the next time someone speaks gushingly about the potential of Smart Cities. Yes, there’s tremendous value in connecting cities, but don’t forget we’re often dealing with administrations that don’t see the value in paying extra to have the broadcast signal for critical infrastructure like the civil defence siren system encrypted.
Not that encryption is a silver bullet either. As Lewis Freiberg shows quite compellingly, Smart City networks, by their very nature of being out there in the city, need to be designed with a different security mindset. One that’s obvious, but then again, not obvious enough: you need to physically secure your infrastructure.
Security is a multi-layered problem that evades easy solutions. And it’s that very interdependence, and layer upon layer of problematic-at-best decisions of the past, that leave us in the poor IT security environment we find ourselves in right now. If even the Economist runs a cover story on the problems of the industry, given what else is happening in the world – have you looked around lately? – you might argue you have a bit of a problem on your hands.
The Economist wonders whether mandatory licensing of professional qualifications might be a way to start tackling the problem of poor software quality, or whether software liability would have to do the trick. Both could be massively disruptive to the way we conceive of new technology, but given NYMag’s investigation into the outsize effect Stack Overflow has on our world by virtue of the programming advice doled out there, you have to wonder whether some form of licensing requirement might indeed be prudent. It would, of course, dispel notions of digital disruption quite quickly. After all, the amazing thing about tech that made so much of the progress of the last three decades possible is that it’s easy and quick to experiment, and that you don’t have to ask for permission.
Then again, maybe the problem of poor IoT security solves itself, as apparently someone has set out to intentionally brick connected products with flagrant security holes. And if that doesn’t work, there’s still AI to fall back on, right? After all, it’s already good enough to catch whistleblowers.
AI and the End of Moore’s Law
The funny thing about Moore’s law – you know, the one about chips doubling in performance every two years – is that it was always about CPUs. Because why would you need more than one general-purpose chip if that one got consistently better, at an exponential rate? But of course, the world doesn’t work that way, and while we wrote off chip-level optimisation for a while (why would you invest in that if the passage of time and the tick-tock of Intel’s Moore’s-law cadence would provide better performance improvements?), we’re coming off the tail end of that particular tech train.
In the meantime, mobile with its constraints on power consumption, gaming, and lately machine learning – with their unique set of computational requirements, namely roughly the same operation, over and over again, rather quickly – have led to a surprising diversification, and a re-bundling of chip technology within larger technology firms. Apple has always been known to try to control as many of the critical parts of its products as possible, and I consistently argue that the acquisition of PA Semi in 2008 was its most important M&A activity after buying NeXT.
So it’s unsurprising that Apple is pursuing its own GPU: given its stance on privacy, it doesn’t have the luxury of just cramming a warehouse full of nVidia units and running its machine learning models there, but instead has to do that on device. And a solid GPU is good for more than just machine learning – Apple is rumoured to introduce some sort of Augmented Reality product fairly soon, after all. The stock of Apple’s current core supplier of GPU tech took a dive, of course, but that’s a fate a lot of fabless chip designers will have to confront in the near future.
And even Google is upping its game in chip design. Last year it introduced a proprietary design it calls the TPU, the Tensor Processing Unit, custom-built for its TensorFlow machine learning system. Compared to the current state of the art in CPU design, it’s almost crude: built on a 28nm process node, it still vastly outperforms even GPUs in machine learning tasks.
Alan Kay famously quipped that people who are really serious about software should make their own hardware. The economic end of Moore’s law in CPUs seems to be driving a lot of hardware innovation, so expect more of this in the near future, especially with all the exciting work around machine learning, self-driving etc. still ahead of us, and potentially more distributed.
And with all the excitement around chip design, let’s not forget that the underlying tasks and techniques of data acquisition and machine learning are far from settled. Google (again) published quite an interesting approach with its federated machine learning, which should, at least in theory, be even more protective of users’ privacy in that no user data gets sent to the cloud at all. All that’s synced is the delta to the downloaded model after additional learning on the user’s device.
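The idea can be sketched in a few lines. This is a toy illustration of the federated-averaging principle described above, not Google’s actual implementation: the “model” is a single scalar weight, and all function names and numbers are made up. The point is the data flow – clients train on private data and upload only the weight delta, the server averages deltas and never sees raw data.

```python
# Toy sketch of federated learning: clients send only model deltas.
# The model is a one-element weight vector fitting a scalar mean;
# all names and values are illustrative assumptions.

def local_update(global_weights, local_data, lr=0.1):
    """Run gradient descent on the client's private data, then
    return only the delta relative to the downloaded model."""
    w = list(global_weights)
    for x in local_data:
        grad = [wi - x for wi in w]            # d/dw of 0.5*(w - x)^2
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return [wi - gi for wi, gi in zip(w, global_weights)]

def server_aggregate(global_weights, deltas):
    """The server averages the client deltas and applies them;
    it never receives any client's training data."""
    n = len(deltas)
    avg = [sum(d[i] for d in deltas) / n for i in range(len(global_weights))]
    return [w + a for w, a in zip(global_weights, avg)]

# Two clients with private data; only deltas cross the network.
global_w = [0.0]
deltas = [local_update(global_w, [1.0, 1.0]),
          local_update(global_w, [3.0, 3.0])]
global_w = server_aggregate(global_w, deltas)  # global_w is now [0.38]
```

The privacy-relevant design choice is visible in the return values: `local_update` hands back a difference of weights, so the raw observations (`1.0`, `3.0`) stay on device.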
Electricity Markets are fun
Just coming off an intense workshop last week, working out scenarios for future electricity markets – which are very likely going to be dominated by much more volatility than we are used to even now, something we talked about here in terms of base-cost renewables – I can’t help but chuckle at how surprised the press, and the public at large, are when things that are blindingly obvious happen.
If you’re building out large amounts of renewable energy, you are ceding partial control over the supply side of your electricity grid to the forces of nature. You are well aware of that in advance, and ideally you have a strategy in place mitigating the worst effects this could have on your grid. You still want to be able to operate it within a rather unforgiving margin around 50 Hz of frequency, which means that supply and demand need to be matched: electricity that is fed into the grid needs to be taken off the grid at the very same moment. Otherwise, things tend to start to explode.
Now, in many parts of the world there are markets for electricity, where you buy and sell according to marginal cost and, crucially, demand. You see where we’re going with this? California just experienced its first negative prices on the spot market, where the supply of electricity is so abundant that you have to pay people to take power off the grid. In Germany, meanwhile, we’re expecting to hit 100 hours of negative spot prices this year.
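The mechanics behind that are worth making concrete. Here’s a toy merit-order clearing – all numbers invented for illustration – showing how negative prices fall out naturally: each generator bids its marginal cost, the price is set by the last unit needed to meet demand, and subsidised or must-run renewables bid below zero because curtailing costs them more than paying to feed in.

```python
# Toy merit-order market clearing. Bids and capacities are invented
# for illustration; real spot markets are far more elaborate.

def clearing_price(bids, demand_mw):
    """bids: list of (marginal_cost_eur_per_mwh, capacity_mw).
    Dispatch cheapest bids first; return the marginal cost of the
    last unit needed to cover demand (the clearing price)."""
    dispatched = 0.0
    for cost, capacity in sorted(bids):   # cheapest first
        dispatched += capacity
        if dispatched >= demand_mw:
            return cost
    raise ValueError("demand exceeds total capacity")

bids = [(-20.0, 900.0),   # wind/solar: pays rather than curtail
        (30.0, 500.0),    # gas
        (45.0, 400.0)]    # peaker

print(clearing_price(bids, 1200.0))  # gas sets the price: 30.0
print(clearing_price(bids, 800.0))   # renewables alone suffice: -20.0
```

On a windy, sunny afternoon with low demand, the 800 MW case applies and the clearing price goes negative – exactly the situation California and Germany are running into.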
What this points to is a failure to increase demand-side flexibility across the system. Better market designs are going to be necessary for that. If your system requires more flexibility, and chooses prices as the means to communicate changes in the underlying capacity, but half of the system can’t react to price changes, then maybe it’s time to have a hard think again.
But it just so happens that there’s a potentially huge new market for electricity that’s just about to come online. We’re talking, of course, of electric cars, which, depending on your perspective, could save electric utilities from a race to the bottom of flattening demand driven by efficiency measures, or put undue strain on already ageing infrastructure.
Either way, they should be fertile ground to experiment with more flexible pricing schemes and market designs, and the Rocky Mountain Institute has looked at tariffs for EV charging and come up with a couple of interesting guidelines.
In the meantime, the UN has produced a study confirming what we all know: we’re installing more renewables than ever, and we’re doing it cheaper than ever. A lot of that is driven by China producing staggering amounts of Solar PV gear.
The Strange, Weird, and Interesting
End notes
That’s it for this week. Hope you enjoyed it.
It’s Easter Weekend coming up, and after that I’ll be travelling (I’ll be in San Francisco April 24th-27th. If you happen to be there, let me know! We should meet) so the Observatory might become a little less regular than I’d like over the next two weeks.
Either way… Cheers!