Vengineerの妄想 (preparation period)

Life is short, but long. Let's enjoy life!

C4ML


The C4ML workshop at CGO 2019 (2019.02.17) should be going on right about now, local time.


I'm hoping the slides and videos get published. => The slides have been released, but it looks like there will be no video release.


 ・"Getting to Machine Learning from a General Purpose Compiler", Keno Fischer & Jameson Nash, Julia Computing

 ・"A Programming Language and Compiler View on AI Systems", Tiark Rompf, Purdue

 ・"TVM: An Automated End-to-End Optimizing Compiler for Deep Learning", Tianqi Chen, University of Washington

 ・"nGraph: Unlocking Next-Generation Deep Learning Performance with Compilers", Jayaram Bobba, Intel

 ・"The Sparse Tensor Algebra Compiler", Saman Amarasinghe, MIT

Wow, this lineup is impressive.

I already knew about (Julia), TVM, Glow, XLA, nGraph, TensorRT, and PlaidML, but it turns out there are plenty of others... NVIDIA's Diesel was there too:
 ・"Polyhedral Compilation of ML Computation Graphs", Vinod Grover, Nvidia
This one!

MIT's "The Sparse Tensor Algebra Compiler" talk is about The Tensor Algebra Compiler (taco).
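For context, taco compiles tensor algebra expressions such as y(i) = A(i,j) * x(j) into fused sparse loop nests. Here is a hand-written pure-Python sketch of the CSR sparse matrix-vector kernel that kind of expression lowers to; this is only illustrative, not taco's actual generated code:

```python
# Hand-written sketch of a CSR sparse matrix-vector product, the kind
# of kernel a tensor algebra compiler like taco generates for
# y(i) = A(i,j) * x(j). Pure Python for illustration, not taco output.

def csr_spmv(indptr, indices, data, x):
    # indptr[i]..indptr[i+1] delimits the nonzeros of row i;
    # indices holds their column positions, data their values.
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[1, 0, 2],
#      [0, 0, 3]] in CSR form:
indptr, indices, data = [0, 2, 3], [0, 2, 2], [1.0, 2.0, 3.0]
assert csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0]) == [3.0, 3.0]
```

The point of taco is that this loop structure is derived automatically from the expression and the chosen sparse formats, rather than written by hand as above.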

At the main conference, CGO 2019, there is also a presentation of
 Tiramisu: A Polyhedral Compiler with A Scheduling Language for Targeting High Performance Systems
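Tiramisu's key idea is separating the algorithm from its schedule; loop tiling is a typical transformation such a scheduling language expresses. A minimal sketch of tiling in plain Python (not Tiramisu's actual syntax), showing that the restructured loop nest covers the same iteration domain:

```python
# Sketch of loop tiling, the kind of schedule transformation a
# polyhedral compiler like Tiramisu applies. Plain Python for
# illustration only, not Tiramisu's scheduling language.

def copy_plain(src, n):
    # Original loop: one flat pass over i in [0, n).
    dst = [0] * n
    for i in range(n):
        dst[i] = src[i]
    return dst

def copy_tiled(src, n, tile=4):
    # Tiled loop: same iteration domain, split into blocks of `tile`
    # iterations to improve locality; results are identical.
    dst = [0] * n
    for i0 in range(0, n, tile):
        for i in range(i0, min(i0 + tile, n)):
            dst[i] = src[i]
    return dst

data = list(range(10))
assert copy_plain(data, 10) == copy_tiled(data, 10)
```

In Tiramisu the programmer writes the computation once and applies tiling, interchange, vectorization, etc. as separate schedule commands, instead of rewriting loops by hand like this.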

"XLA and lessons learned", Bjarke Roune (presenter), and everyone on the XLA team, Google

XLA is a compiled high-performance backend for ML systems like TensorFlow and PyTorch, with backends for CPU, GPU, and TPU, and others in development. This talk will go through how XLA works, some of the philosophy behind XLA, and some of the lessons we learned building it, including some of our experiences developing the TPU backend. Topics will include operator fusion, static versus dynamic shapes, and challenges in targeting a brand-new accelerator type, like how we have maintained high productivity while writing code for TPUs that require pipelining in software of many different kinds of resources.
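The operator fusion the abstract mentions is easy to sketch by hand: instead of materializing an intermediate buffer between two elementwise ops, a fused kernel computes both in a single pass. A pure-Python illustration (function names are mine, not XLA's):

```python
# Hand-written illustration of operator fusion, a concept from the XLA
# abstract above. Function names are illustrative, not XLA APIs.

def unfused(xs):
    # Two separate elementwise ops: the multiply materializes a
    # temporary buffer, then the add makes a second pass over memory.
    tmp = [x * 2.0 for x in xs]
    return [t + 1.0 for t in tmp]

def fused(xs):
    # Fused kernel: both ops in one pass, no intermediate buffer.
    return [x * 2.0 + 1.0 for x in xs]

assert unfused([0.0, 1.0, 2.0]) == fused([0.0, 1.0, 2.0]) == [1.0, 3.0, 5.0]
```

A compiler like XLA performs this transformation automatically on the operator graph, which matters most on accelerators where memory bandwidth, not compute, is the bottleneck.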
That one is on the program as well.

And the last one:
"MLIR Primer: A Compiler Infrastructure for the End of Moore’s Law", Chris Lattner, Jacques Pienaar, and everyone on the MLIR team, Google