• That’s capturing everything. Ultimately you need only a tiny fraction of that data to emulate the human brain.

    Numenta is working on a brain model to create functional sections of the brain. Their approach is different, though: they are trying to understand the components and how they work together rather than just aggregating vast amounts of data.

      • Think of this:

        You find a computer from 1990. You photograph each 1 KB memory chip on its RAM sticks with a DSLR camera, and each RAW image comes out at 1 GB. With 8 chips per stick and 4 sticks, you project it will take 32 GB of images to capture your 32 KB of RAM.

        You’ve described nothing about the RAM itself. That measurement is meaningless other than telling you how detailed the imaging process is.
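The arithmetic in that analogy can be sketched as a quick back-of-the-envelope check, using the hypothetical figures from the example, of how far the imaging volume outstrips the data actually stored:

```python
# Overhead of photographing RAM chips vs. the data they actually hold.
# All figures are the hypothetical ones from the example above.

KB = 1024
GB = 1024 ** 3

chip_capacity = 1 * KB      # each chip stores 1 KB
chips_per_stick = 8
sticks = 4
image_size = 1 * GB         # one RAW photo per chip

total_chips = chips_per_stick * sticks       # 32 chips
ram_bytes = total_chips * chip_capacity      # 32 KB of actual data
image_bytes = total_chips * image_size       # 32 GB of photos

print(f"RAM captured: {ram_bytes // KB} KB")
print(f"Image data:   {image_bytes // GB} GB")
print(f"Overhead:     {image_bytes // ram_bytes:,}x")
```

The capture is roughly a million times larger than the content, which is the point of the analogy: the measurement describes the imaging process, not the thing being imaged.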

    • Ultimately you need only a tiny fraction of that data to emulate the human brain.

      I am curious how that conclusion was formed, as we have only recently discovered many new types of functional brain cells.

      While I am not saying this is the case, that statement sounds like it was based on the “we only use 10% of our brain” myth, so that is why I am trying to get clarification.

      • Oh, I’m not basing that on the 10% mumbo jumbo, just on the fact that data capture usually over-captures. Distilling it down to just the bare functional essence will result in a far smaller data set. Granted, as you noted, there are new neuron types still being discovered, so what to discard is the question.

    •  eleitl ( @eleitl@lemmy.ml ) · 72 months ago

      No, that captures just the neuroanatomy, not properties like the density of ion channels, the type and strength of each synapse, and all the things we don’t know yet.