New Focus for Lattice
A few years ago, Lattice Semiconductor attempted to broaden its product portfolio by going beyond Field Programmable Gate Arrays (FPGAs), acquiring assets that led to the debut of various consumer-electronics-oriented products (e.g. USB-C and HDMI). While the new products opened new doors for the company, they came at a high price. The new focus areas proved to have much lower ROI than the core (FPGA) business. Additionally, the technological diversity proved to be a burden for the R&D team. Consequently, the company became less able to capitalize on major FPGA opportunities, and that took a toll on profitability.
As a result, the company needed a new strategy and a new management team. Jim Anderson (formerly of AMD) took the helm in September of 2018 and engineered an overhaul of the company’s product strategy. It turned out that the right recipe for Lattice was to narrow its focus, stick to its knitting (FPGAs), and do away with everything else. His hand-picked executive team became laser-focused on building on top of the highly lucrative FPGA business. The strategy has worked: profitability has improved, and the company’s stock has nearly tripled. Lattice’s claim to fame is offering the lowest-power field-programmable solutions out there. While Xilinx and Intel build higher-end products consuming hundreds of watts, Lattice is an expert at building smaller, less expensive solutions ranging from 1mW to 1W. The serviceable addressable market (SAM) for this segment is nothing to sneeze at; it will reach nearly $4B by 2023, giving the company plenty of room to grow.
So, what does this have to do with AI?
The company has opted to pursue a platform-based approach to product development, where a single architecture is used to develop multiple families, each catering to a specific application. Their customers were having difficulty building system-level solutions using FPGAs with a generic architecture. The new paradigm incorporates various bells and whistles that are much valued in specific target applications. The company’s new product roadmap consists of three product families addressing the specific needs of applications in Embedded Vision, Artificial Intelligence, and Security. A few terms come to mind when I hear the name Lattice: “field programmability,” “low power,” “AI,” “embedded vision,” and “security.” All of the above are essential ingredients of an ideal AI inference engine. While I can’t claim that their strategy will win hearts and minds in non-AI applications, I am pretty confident that Lattice will be a strong voice in edge inference applications.
Sudden Drop in Server Shipments
A fascinating article in The Next Platform (see here) reports that server unit shipments in Q1 of 2019 dropped 5.1% compared to the same period in 2018 (2.58M vs. 2.7M). Below I have captured a few key findings from the article:
- One possible reason may be overspending by hyperscalers and cloud builders in 2017 and 2018
- The silver lining is that revenues (per IDC estimates) increased 4.4%, reaching $19.8B, despite the drop in units shipped. The increase in average selling price (ASP) can be attributed to increased demand for more memory as well as GPU and FPGA accelerators
- The price war between AMD (“Rome” Epyc processors) and Intel (“Ice Lake” Xeon processors) is heating up, and it will certainly compress gross margins for both players. It remains to be seen whether the volume increase in server shipments will be large enough to preserve the absolute profits of Intel and AMD
- Dell, HPE, Lenovo, Cisco, and IBM are the OEM leaders (Dell being the largest). Their aggregate revenues in Q1 were $10.5B (53% of the market). The revenues for the “ODM Direct” and “Others” categories sum to $9.2B (47% of the market)
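The figures above hang together on a quick back-of-the-envelope check. This sketch uses only the numbers as reported; the small gap between $10.5B + $9.2B = $19.7B and the $19.8B total, and between the computed ~46.5% and the reported 47%, is rounding in the source data:

```python
# Back-of-the-envelope check of the Q1 2019 server figures cited above.
total_revenue_b = 19.8   # Q1 2019 server revenue, $B (IDC estimate)
oem_revenue_b = 10.5     # Dell + HPE + Lenovo + Cisco + IBM, $B
odm_other_b = 9.2        # "ODM Direct" + "Others", $B

oem_share = oem_revenue_b / total_revenue_b   # ~53%, as reported
odm_share = odm_other_b / total_revenue_b     # ~46.5%, reported as 47%
print(f"OEM share: {oem_share:.1%}, ODM+Others: {odm_share:.1%}")

# Implied average selling price (ASP) move: revenue +4.4% on 5.1% fewer units.
units_2019_m, units_2018_m = 2.58, 2.70       # million units shipped in Q1
revenue_2018_b = total_revenue_b / 1.044      # back out Q1 2018 revenue
asp_2019 = total_revenue_b / units_2019_m     # $ thousands per unit
asp_2018 = revenue_2018_b / units_2018_m
print(f"Implied ASP change: {asp_2019 / asp_2018 - 1:+.1%}")  # roughly +9%
```

In other words, the 4.4% revenue growth on 5.1% fewer boxes implies the average server got roughly 9% more expensive year over year, consistent with the richer memory/GPU/FPGA configurations mentioned above.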
A sudden drop in server unit shipments is counterintuitive to me. I would have expected the emergence of AI (as a relatively new application for servers) to boost server unit shipments. That said, one can easily upgrade the installed base by adding PCIe AI accelerator modules, alleviating the need for new machines. This slowdown was also mentioned in Broadcom’s depressing quarterly earnings call (yesterday).
Tidbits about DinoPlusAI
I had the privilege of seeing Jay Hu’s (DinoPlusAI’s Founder/CEO) presentation covering the company and its technology. I was able to capture the following tidbits from his presentation:
- The key differentiator for DinoPlusAI’s AI accelerator chips is low latency, which is clearly a critical metric for all edge applications
- The company has raised $6.5M in 2017 from the likes of:
- Perceptin (Visual perception technology for robots and autonomous vehicles)
- Inspur (Servers)
- Rokid (Robotics company based in China)
- Ford (Automotive)
- MEGVII (Smart city/IoT company based in China)
- SiFive (Processor IP)
- The company has commitments for $8M in revenue (in the form of Letters of Intent)
- They have 6 top-tier design wins
- They claim to have an optimal solution for 5G edge applications thanks to their low latency
- Their solution consumes 45W compared to 300W for Tesla V100 (running similar workloads)
- The addressable market size for AI Edge Inference chipsets will reach $66B by 2025
Mythic’s New Funding
Congratulations to the folks at Mythic for raising $30M (Series B-1) led by Valor Equity Partners, with new investors Future Ventures, Atreides, Micron Ventures, and Lam Research.