That’s capturing everything. Ultimately you need only a tiny fraction of that data to emulate the human brain.
Numenta is working on a brain model to create functional sections of the brain. Their approach is different, though: they are trying to understand the components and how they work together rather than just aggregating vast amounts of data.
No, it does not. It captures only the physical structures; the chemical and electrical state is missing.
Think of this:
You find a computer from 1990. You take a picture (image) of a 1KB memory chip on a RAM stick; the machine has 4 RAM sticks with 8 chips each. You are using a DSLR camera, and each image in RAW comes out at 1GB. You project that, with 8 chips per stick and 4 sticks, it'll take 32GB to image your 32KB of RAM.
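The arithmetic in the analogy above can be sketched in a few lines (all numbers are taken straight from the comment: 1KB chips, 8 chips per stick, 4 sticks, 1GB per RAW photo):

```python
# Back-of-the-envelope math for the RAM-imaging analogy.
KB = 1024
GB = 1024**3

chips_per_stick = 8
sticks = 4
chip_capacity = 1 * KB        # actual data each chip holds
raw_image_size = 1 * GB       # size of one RAW photo of a chip

total_chips = chips_per_stick * sticks        # 32 chips
total_ram = total_chips * chip_capacity       # 32 KB of actual RAM
total_imaging = total_chips * raw_image_size  # 32 GB of photos

print(total_ram // KB)             # 32 (KB of RAM)
print(total_imaging // GB)         # 32 (GB of images)
print(total_imaging // total_ram)  # 1048576 (a million-fold blow-up)
```

The point of the comparison: the imaging data is about a million times larger than the thing being imaged, and the ratio tells you only how detailed the camera is, not anything about the RAM's contents.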
You've described nothing about the RAM. This measurement is meaningless other than telling you how detailed the imaging process is.
I am curious how that conclusion was formed as we have only recently discovered many new types of functional brain cells.
While I am not saying this is the case, that statement sounds like it was based on the “we only use 10% of our brain” myth, so that is why I am trying to get clarification.
They took imaging scans. I just took a picture of a 1KB memory chip and omg my picture is 1GB in RAW. The full set of RAM that chip was on could take dozens of GB to image!
Oh, I'm not basing that on the 10% mumbo jumbo, just that data capture usually over-captures. Distilling it down to the bare functional essence will result in a far smaller data set. Granted, as you noted, there are new neuron types still being discovered, so what to discard is the question.
No, that captures just the neuroanatomy, not the physiological properties: ion-channel densities, synapse types and strengths, and all the things we don't know yet.
I don't think any simplified model can work EXACTLY like the real thing. Ask rocket scientists.
Fortunately it doesn’t have to be exactly like the real thing to be useful. Just ask machine learning scientists.
Given the prevalence of intelligence in nature using vastly different neurons I’m not sure if you even need to have an exact emulation of the real thing to achieve the same result.
I mean, they probably use vast amounts of data to learn how it all works.