Energy efficiency is one of the most crucial design constraints of modern computing systems: it dictates performance, lifetime, form factor, and cost. Until the 2000s, energy efficiency improvements were largely driven by Moore’s law. Over the last decade, however, technology scaling has slowed, and with the “end of Moore’s law” predicted in the near future, technology-scaling-driven energy efficiency improvement has come to an end. “Approximate computing” is a new paradigm for energy-efficient computing in this twilight of Moore’s law: it relaxes the exactness requirement on computation results for intrinsically error-resilient applications, such as deep learning and signal processing, and produces results that are “just good enough.” It exploits the fact that the output quality of such error-resilient applications is not fundamentally degraded even when the underlying computations are greatly approximated. This favorable energy-quality tradeoff opens up new opportunities to improve the energy efficiency of computing, and a large body of approximate computing methods for energy-efficient “data processing” has been proposed.

In this talk, I will introduce approximate computing methods that accomplish “full-system energy-quality scalability.” This approach extends the scope of approximation from the processor to other system components, including sensors and interconnects, enabling energy-efficient “data generation” and “data transfer” that fully exploit the energy-quality tradeoffs across the entire system.
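To make the energy-quality tradeoff concrete, the following is a minimal, hypothetical sketch (not from the talk) of one classic approximate computing technique, loop perforation: skipping a fraction of loop iterations to save work, and thus energy, while an error-resilient computation such as averaging a noisy sensor signal loses little output quality. All names here (`exact_mean`, `approx_mean`, the `skip` parameter) are illustrative assumptions.

```python
import random

def exact_mean(xs):
    # Baseline: process every element.
    return sum(xs) / len(xs)

def approx_mean(xs, skip=4):
    # Loop perforation (illustrative): process only every `skip`-th
    # element, doing roughly 1/skip of the work in exchange for a
    # small, bounded loss in accuracy.
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

random.seed(0)
# Hypothetical noisy sensor signal centered at 100.0.
signal = [random.gauss(100.0, 5.0) for _ in range(10_000)]

exact = exact_mean(signal)
approx = approx_mean(signal, skip=4)  # ~4x fewer additions
rel_error = abs(approx - exact) / exact
```

Under this sketch, the approximate version touches only a quarter of the data, yet its relative error stays in the sub-percent range — the kind of “just good enough” result that error-resilient applications can tolerate.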