JSON is now used everywhere, especially in the context of HTTP requests. It's not a very old format (it was invented in the early 2000s), and it's neither the most efficient nor the most complete one, but for some reason it's the one that took off, and it's still widely used.
The problem is: because it's used so often, how much CPU is wasted around the world just to serialize/deserialize objects from JSON? In a paper from 2018, the author shows that many big data applications spend 80-90% of their execution time just parsing the data, and other studies show similar results. You can see the same problem when your application grows and starts returning big JSON payloads over HTTP, and the client starts "suffering" from the parsing.
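To get a feel for that cost on your own data, you can time the standard-library parser directly. This is just a rough illustration with a synthetic payload (the record shape here is made up), not the paper's benchmark:

```python
import json
import time

# Build a synthetic payload of 100,000 small records (hypothetical data).
records = [{"id": i, "name": f"user{i}", "active": i % 2 == 0}
           for i in range(100_000)]
payload = json.dumps(records)

# Time only the parsing step, which is the cost the paper focuses on.
start = time.perf_counter()
parsed = json.loads(payload)
elapsed = time.perf_counter() - start

print(f"parsed {len(payload)} bytes in {elapsed:.3f}s")
```

Multiply a number like this by every request a busy service handles and the aggregate CPU cost becomes hard to ignore.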
There are many solutions to this problem, but in this paper the author describes a new technique to parse (and validate) JSON documents with far better performance than other libraries, using the SIMD instructions available in modern CPUs.
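One core idea in the paper's first stage is to scan the input in fixed-size blocks and build an index of "structural" characters (braces, brackets, colons, commas, quotes), using SIMD comparisons to produce one bitmask per block instead of branching on every byte. Here is a scalar Python sketch of that bitmask idea, heavily simplified (the real implementation does the compare with vector instructions and also handles quoted strings, escapes, and UTF-8 validation):

```python
STRUCTURAL = set(b'{}[]:,"')

def structural_bitmask(block: bytes) -> int:
    # Bit i is set when block[i] is a structural character. A SIMD
    # compare produces this mask for a whole block in one instruction;
    # here we build it byte by byte just to show the result.
    mask = 0
    for i, byte in enumerate(block):
        if byte in STRUCTURAL:
            mask |= 1 << i
    return mask

def structural_indexes(data: bytes) -> list[int]:
    # Extract the position of each set bit from the per-block masks;
    # the paper does this with count-trailing-zeros style bit tricks.
    indexes = []
    for base in range(0, len(data), 64):
        mask = structural_bitmask(data[base:base + 64])
        while mask:
            low = mask & -mask                      # isolate lowest set bit
            indexes.append(base + low.bit_length() - 1)
            mask ^= low                             # clear it and continue
    return indexes

print(structural_indexes(b'{"a": [1, 2]}'))
```

With that index built branch-free, the second stage can walk only the interesting positions instead of re-examining every byte, which is where much of the speedup comes from.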
This is a very interesting and technical read that shows how a problem can be tackled with a different strategy, and why this kind of performance improvement can matter a lot for your application.