
                Computer Science > Neural and Evolutionary Computing

                Title: A 28-nm Convolutional Neuromorphic Processor Enabling Online Learning with Spike-Based Retinas

                Abstract: In an attempt to follow biological information representation and organization principles, the field of neuromorphic engineering is usually approached bottom-up, from the biophysical models to large-scale integration in silico. While ideal as experimentation platforms for cognitive computing and neuroscience, bottom-up neuromorphic processors have yet to demonstrate an efficiency advantage compared to specialized neural network accelerators for real-world problems. Top-down approaches aim at answering this difficulty by (i) starting from the applicative problem and (ii) investigating how to make the associated algorithms hardware-efficient and biologically-plausible. In order to leverage the data sparsity of spike-based neuromorphic retinas for adaptive edge computing and vision applications, we follow a top-down approach and propose SPOON, a 28-nm event-driven CNN (eCNN). It embeds online learning with only 16.8% power and 11.8% area overheads with the biologically-plausible direct random target projection (DRTP) algorithm. With an energy per classification of 313 nJ at 0.6V and a 0.32-mm$^2$ area for accuracies of 95.3% (on-chip training) and 97.5% (off-chip training) on MNIST, we demonstrate that SPOON reaches the efficiency of conventional machine learning accelerators while embedding on-chip learning and being compatible with event-based sensors, a point that we further emphasize with N-MNIST benchmarking.
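                The DRTP rule mentioned in the abstract can be sketched for a small dense network: hidden layers are updated from a fixed random projection of the one-hot target rather than from back-propagated errors, which removes the update locking and weight transport of backprop. The sketch below is illustrative only; the layer sizes, activation, and learning rate are assumptions, and SPOON itself applies DRTP to convolutional layers in hardware.

                ```python
                import numpy as np

                rng = np.random.default_rng(0)

                # Tiny 2-layer MLP with MNIST-like shapes (784 -> 100 -> 10); all
                # dimensions here are illustrative assumptions, not SPOON's topology.
                n_in, n_hid, n_out = 784, 100, 10
                W1 = rng.normal(0, 0.1, (n_hid, n_in))
                W2 = rng.normal(0, 0.1, (n_out, n_hid))
                # Fixed random matrix projecting the target onto the hidden layer;
                # it is never trained.
                B1 = rng.normal(0, 0.1, (n_hid, n_out))

                def sigmoid(z):
                    return 1.0 / (1.0 + np.exp(-z))

                def drtp_step(x, y_onehot, lr=0.01):
                    """One DRTP update: the output layer uses the standard delta rule,
                    while the hidden layer's modulatory signal is B1 @ y*, a random
                    projection of the target that is independent of the output error."""
                    global W1, W2
                    h1 = sigmoid(W1 @ x)
                    y_hat = sigmoid(W2 @ h1)
                    # Output layer: ordinary error-driven update.
                    e_out = y_hat - y_onehot
                    W2 -= lr * np.outer(e_out, h1)
                    # Hidden layer: target projection replaces the back-propagated error.
                    delta1 = (B1 @ y_onehot) * h1 * (1.0 - h1)
                    W1 -= lr * np.outer(delta1, x)
                    return y_hat
                ```

                Because B1 is fixed and the hidden update depends only on the input and the target, each layer can learn locally and immediately after its forward pass, which is what makes DRTP cheap to embed on-chip.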
                Comments: Accepted for presentation at the IEEE International Symposium on Circuits and Systems (ISCAS) 2020
                Subjects: Neural and Evolutionary Computing (cs.NE); Emerging Technologies (cs.ET); Image and Video Processing (eess.IV)
                Cite as: arXiv:2005.06318 [cs.NE]
                  (or arXiv:2005.06318v1 [cs.NE] for this version)

                Submission history

                From: Charlotte Frenkel [view email]
                [v1] Wed, 13 May 2020 13:47:44 GMT (2594kb,D)