Traditional data management software and algorithms are built on the concept of persistent data sets that are reliably stored in stable storage and can be queried and updated several times throughout their lifetimes. In reality, however, data is rarely complete and stationary. In its latest report, the International Data Corporation predicts that in 5 years … Read more: Why stream processing systems are now more relevant than ever
The past five years have seen a significant change in the way cloud servers look. Traditionally homogeneous cloud systems are progressively shifting to heterogeneous designs, either through special-purpose chips, like Google’s TPUs, or reconfigurable fabrics, like Microsoft’s Catapult and Brainwave projects; sometimes, they even adopt a combination of the two. This post provides an … Read more: The Increasing Heterogeneity of Cloud Hardware and What It Means for Systems
We build computer systems around abstractions. The right abstractions are timeless and versatile, meaning they apply in many different settings. Such abstractions make developers’ lives easier and allow system developers and researchers to optimize performance “below the hood”, changing systems without breaking the abstractions. In this post, I will argue that the abstraction of … Read more: The Remarkable Utility of Dataflow Computing
The POPLmark Challenge helped stir lasting excitement about mechanized proofs within the PL community. Despite the advances and successes, there is much more to do. This post reflects on the state of mechanized proof in PL, organized around the topics and discussions that arose at the POPLmark 15 Year Retrospective Panel.