RAVEN focuses on the time-scalability of DL models

I see two aspects in which RAVEN coincides with SimpleMachines's post.

First, RAVEN targets nonlinearity, sparsity, and the compatibility of accelerators with emerging neural networks. SimpleMachines nicely summarises recent developments in DNNs, using NMT as an example. BERT-based and LSTM-based implementations of NMT pose different requirements on the hardware, which I believe fall into the realm of RAVEN.

Second, RAVEN is architected around reconfigurable MAC arrays, introducing minimal programming overhead, a direction also suggested by SimpleMachines.
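To make the idea concrete, here is a minimal sketch of how a reconfigurable MAC array can be modeled in software. This is my own toy illustration, not RAVEN's actual microarchitecture: the `(rows, cols)` shape stands in for the reconfiguration knob, and each tile of multiply-accumulates counts as one "cycle".

```python
import numpy as np

def mac_array_matvec(W, x, rows=4, cols=4):
    """Compute y = W @ x by tiling the work onto a rows x cols MAC array.

    Hypothetical model: each tile performs up to rows*cols
    multiply-accumulate (MAC) operations per "cycle"; changing
    (rows, cols) reconfigures the array shape without changing
    the result, only the cycle count.
    """
    m, n = W.shape
    y = np.zeros(m)
    cycles = 0
    for i0 in range(0, m, rows):          # tile over output rows
        for j0 in range(0, n, cols):      # tile over input columns
            for i in range(i0, min(i0 + rows, m)):
                for j in range(j0, min(j0 + cols, n)):
                    y[i] += W[i, j] * x[j]  # one MAC operation
            cycles += 1                   # one tile issued per cycle
    return y, cycles
```

With this model, a 2x3 weight matrix fits in a single 4x4 tile (one cycle), while a 2x2 configuration needs two tiles; both produce the same result, which is the point of reconfiguring the array to match the workload.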

More ideas about the scheduling of RAVEN will come in a later post.

Di Wu
PhD student

A Wisconsin Badger in Computer Architecture!
