Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap

@InProceedings{7987592,
  author =       "Muhammad Shafique and Rehan Hafiz and 
                 Muhammad Usama Javed and Sarmad Abbas and Lukas Sekanina and 
                 Zdenek Vasicek and Vojtech Mrazek",
  title =        "Adaptive and Energy-Efficient Architectures for
                 Machine Learning: Challenges, Opportunities, and
                 Research Roadmap",
  booktitle =    "2017 IEEE Computer Society Annual Symposium on VLSI
                 (ISVLSI)",
  year =         "2017",
  pages =        "627--632",
  address =      "Bochum, Germany",
  month =        "3-5 " # jul,
  publisher =    "IEEE",
  keywords =     "genetic algorithms, genetic programming",
  DOI =          "10.1109/ISVLSI.2017.124",
  size =         "6 pages",
  abstract =     "Gigantic rates of data production in the era of Big
                  Data, Internet of Things (IoT)/Internet of Everything
                  (IoE), and Cyber-Physical Systems (CPS) pose
                 incessantly escalating demands for massive data
                 processing, storage, and transmission while
                 continuously interacting with the physical world under
                 unpredictable, harsh, and energy-/power-constrained
                  scenarios. Therefore, such systems need to support not
                  only high-performance capabilities within a tight
                  power/energy envelope, but also need to be
                  intelligent/cognitive, self-learning, and robust. As a
                  result, a surge of artificial intelligence research
                  (e.g., deep learning and other machine learning
                  techniques) has emerged across numerous communities. This
                 paper discusses the challenges and opportunities for
                 building energy-efficient and adaptive architectures
                  for machine learning. In particular, we focus on
                  emerging brain-inspired computing paradigms, such as
                  approximate computing, that can further reduce the
                  energy requirements of the system. First, we walk
                  through an approximate-computing-based methodology for
                  developing energy-efficient accelerators,
                  specifically for convolutional Deep Neural Networks
                 (DNNs). We show that in-depth analysis of data paths of
                 a DNN allows better selection of Approximate Computing
                 modules for energy-efficient accelerators. Further, we
                 show that a multi-objective evolutionary algorithm can
                 be used to develop an adaptive machine learning system
                  in hardware. Finally, we summarize the challenges
                 and the associated research roadmap that can aid in
                 developing energy-efficient and adaptable hardware
                 accelerators for machine learning.",
}
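
The abstract mentions two technical ideas: selecting approximate arithmetic
modules based on an analysis of the DNN data paths, and using a
multi-objective evolutionary algorithm to trade accuracy against energy. As a
rough illustration of that general idea (not the authors' tool flow), the
Python sketch below searches for module-per-layer assignments; the module
library, error/energy figures, layer names, and operation counts are all
invented for the example.

import random

# Hypothetical approximate-multiplier library:
# name -> (mean relative error per op, relative energy per op)
MODULES = {
    "exact":   (0.000, 1.00),
    "approx1": (0.005, 0.80),
    "approx2": (0.020, 0.55),
    "approx3": (0.080, 0.35),
}
LAYERS = ["conv1", "conv2", "conv3", "fc1", "fc2"]   # assumed network layers
OPS    = [0.5e6, 2.0e6, 4.0e6, 1.0e6, 0.1e6]         # assumed multiplications per layer

def evaluate(assignment):
    """Estimate (accuracy-loss proxy, energy) for one module-per-layer choice."""
    err = sum(MODULES[m][0] * ops for m, ops in zip(assignment, OPS)) / sum(OPS)
    energy = sum(MODULES[m][1] * ops for m, ops in zip(assignment, OPS))
    return err, energy

def dominates(a, b):
    """Pareto dominance when minimising both objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mutate(assignment):
    """Swap the approximate module used by one randomly chosen layer."""
    child = list(assignment)
    child[random.randrange(len(child))] = random.choice(list(MODULES))
    return child

def pareto_front(population):
    """Keep only configurations not dominated by any other configuration."""
    scored = [(ind, evaluate(ind)) for ind in population]
    return [ind for ind, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

random.seed(0)
population = [[random.choice(list(MODULES)) for _ in LAYERS] for _ in range(20)]
for _ in range(50):                       # simple evolutionary loop
    offspring = [mutate(random.choice(population)) for _ in range(20)]
    population = pareto_front(population + offspring)

for ind in population:                    # print the surviving trade-off points
    err, energy = evaluate(ind)
    print(ind, "err=%.4f" % err, "energy=%.2e" % energy)

Each printed configuration is one point on the estimated error/energy
trade-off curve; the paper itself works at the hardware-accelerator level
rather than with such a toy model.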
