Singularity Summit: Exchange Notes?


Jata

Oct 3, 2009, 11:09:10 AM10/3/09
to DIYh+
Hey all,

I know a lot of us are in NY right now, at the Singularity Summit, and
I was wondering if anyone wanted to exchange notes to facilitate some
follow-up discussions. It didn't occur to me that I might want to
share these notes until almost the end of the last talk, so they're
more detailed near the end. I missed most of the first talk, but here
is what I happened to type while the rest were going. If you have
further notes/things to contribute, I'd be really happy to see them.

*9:35 am
Technical Roadmap for Whole Brain Emulation
Anders Sandberg, Future of Humanity Institute

Anders on whole brain emulation (uploading):
Brain Scanning: minimum of 5x5x50nm resolution is consensus. We have
machines that can do better.
Current record for parallel simulation: Djurfeldt, Lansner et al.
(22 million 6-compartment neurons, 11 billion synapses).
For human realism: 100tflops.
Possible, if supercomputers are designed with WBE more specifically in
mind.
General note of encouragement.
Knife-edge scanning microscope (KESM): wild stuff. Check it out if you
haven't: http://research.cs.tamu.edu/bnl/kesm.html
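A quick sanity check on the numbers above (my own back-of-envelope
arithmetic, not from the talk): at the consensus 5x5x50 nm scanning
resolution, the raw image data for a whole human brain is enormous.

```python
# Back-of-envelope: raw data volume for scanning a ~1.4 L human brain
# at the 5 x 5 x 50 nm consensus resolution mentioned above.
# The 1-byte-per-voxel assumption is mine (8-bit greyscale).
brain_volume_m3 = 1.4e-3                 # ~1.4 litres
voxel_m3 = 5e-9 * 5e-9 * 50e-9           # one 5 x 5 x 50 nm voxel
voxels = brain_volume_m3 / voxel_m3      # ~1.1e21 voxels
zettabytes = voxels * 1 / 1e21           # 1 byte per voxel
print(f"{voxels:.2e} voxels, ~{zettabytes:.1f} ZB of raw image data")
```

Around a zettabyte of raw imagery, which is why the later talks keep
coming back to automated interpretation rather than storage of raw scans.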

*10:00 am
The time is now: As a species and as individuals we need whole brain
emulation
Randal Koene, Fatronik-Tecnalia Foundation

Randal Koene:
Human connectome. The C. elegans connectome is complete, if you're
interested in seeing what that looks like.
Virtual brain lab: scientific model testing, clonal trial and
treatment (although we'll play nice and give the emulated minds their
chance for informed consent as participants in our studies.)
Large-scale high-resolution data
Automated tape-collecting lathe ultramicrotome
State, transition, update (voxel, impulse, etc)
http://minduploading.org/, http://neuralprostheses.org/

*10:25 am
Technological Convergence Leading to Artificial General Intelligence
Itamar Arel, University of Tennessee

UT Machine Intelligence Lab -- surprisingly on point, it seems,
without providing data.
Brain represents info using a repetitive hierarchical structure (very
Jeff Hawkins/Numenta-esque view of neocortical hierarchy and AGI)
Focused, as usual, on vision: supposedly able to detect facial
expressions and awake/asleep states
More information from their website:
http://mil.engr.utk.edu/nmil/publications
http://mil.engr.utk.edu/nmil/research
Specifically relevant paper:
I. Arel, D. Rose, R. Coop, "DeSTIN: A Scalable Deep Learning
Architecture with Application to High-Dimensional Robust Pattern
Recognition," to appear in the AAAI 2009 Fall Symposium on
Biologically Inspired Cognitive Architectures, November 2009 [pdf]
He concludes:
-The pieces of the AGI puzzle are here
-Enabling VLSI technology is here
-AGI *could* be around the corner (note the vagueness of this phrase)
-Now is the right time to discuss (1) moral implications, (2)
socioeconomic impact, (3) regulation policies
Recommended Literature:
-Numenta's research
-On Intelligence
He was awfully vague about results, saying only that AGI is "possible
within 10 years."

Bryan Bishop

Oct 3, 2009, 4:29:45 PM10/3/09
to diytrans...@googlegroups.com, kan...@gmail.com
On Sat, Oct 3, 2009 at 10:09 AM, Jata <pari...@gmail.com> wrote:
> I know a lot of us are in NY right now, at the Singularity Summit, and
> I was wondering if anyone wanted to exchange notes to facilitate some
> follow-up discussions. It didn't occur to me that I might want to
> share these notes until almost the end of the last talk, so they're
> more detailed near the end. I missed most of the first talk, but here
> is what I happened to type while the rest were going.  If you have
> further notes/things to contribute, I'd be really happy to see them.


And then I'll be happy to try some other .. so that we can see each
other in 26 years. The first is please no flash photography. Turn off
your cell phones. The second is that videos will be available on the
web. The third is that you must keep your tickets with you at all
times. In the next 10 or 11 hours, you will probably want to leave.
If you do not have your ticket, you will not be able to get back in.
It's that important, don't forget. There is food and coffee in the
lobby at all times. Feel free to go out there and take a bathroom
break. This is a fabulous group of people and you'll want to spend
some time with them. We also have KNOME, the biotech company, as one
of our sponsors. It will be announced before the second break
tomorrow. If you want to win the $2.5k value, or 2 years of cryogenic
storage, and 15% off genome sequencing, you can prove to yourself that
you're really living it. I'm happy to have all of you here today.
Please turn off your cell phones. Keep your tickets with you at all
times. It's going to be a great show.

Here's Anna Salamon, one of the latest additions to the Singularity
Institute's research staff. If during the presentation you have
questions, detach the questionnaire and note how you found out about
us, so we can do a better job next time. Anna Salamon.

Shaping the intelligence explosion

I. J. Good coined the term, to refer to the idea of intelligence as
the source of our technology. If we get to the point where there is a
significant improvement in the people who design technology, where
smart leads to smarter leads to smarter, the result of this is far
from human. I am going to be talking about the shaping of this
intelligence explosion. Is there something we can do to shape how this
turns out?

Claim one. Intelligence can radically transform the world.
Claim two. An intelligence explosion may be sudden.
Claim three. An uncontrolled intelligence explosion would kill us and
destroy practically everything we care about.
Claim four. A controlled intelligence explosion could save us. It's
difficult but worth our attention.

What we know and how we know it about ai. How did we arrive at these
four claims? The unknowns in these scenarios are vast. The biggest
unknown is the type of artificial intelligence we might create: any
kind of intelligence that's at least as powerful as humans. I'm
stealing this image from Eliezer Yudkowsky. We have this vast space of
possible minds or machines: anything in that space except the little
dot labeled "human mind". There is no one goal that artificial
intelligence would have. There is no one particular architecture.
Talking about artificial intelligences, minds that are not humans, is
like talking about animals that aren't starfish or foods that aren't
pineapples. Then there are the external circumstances in which ai
might be created. We don't know the year, assuming science will
continue to chug along; the type of people; the precautions; whether
it arrives in an economy with strong special-purpose ai, or just
arrives as a shock. And because of that, you might wonder if we could
say anything at all about these vast unknowns. Imagine rerunning the
tape of life. There would be all sorts of possibilities to explore,
but there would be recognizable features across a lot of branches:
like eyes; energy storage mechanisms like glucose, sugars, oil,
batteries; digitality, like DNA, writing and computer memory used in
replication; money, computation, mathematics. There is a variety of
different systems being used, but they serve a particular purpose ..

I will not be talking about Moore's law or other accelerating-change
models. I will give you a whirlwind tour; send me an email for the
rest.

Claim 1. Intelligence can radically transform the world. Think here of
Archimedes. Build something with enough intelligence and almost always
it will move the world. Intelligence is like leverage. An intelligent
being might start off with only a little physical power, but if it
finds novel ways to achieve its goals, it can use that small power to
create more change. Humans change up the world quite a bit, and we
change it up quite a bit more than most species. The reason we made
these changes is that we had goals, and we rearranged the world to
serve them. The smarter we've got, the more knowledge we've had, the
more organizations we've founded, the more rearrangements there are.
We started with stone and wood tools and made our way up to
Starbucks. Note that the mechanism here that caused humans to do this
is basically universal to all intelligence. Most goals are not
maximally satisfied by the particular state things happen to be in.
The smarter the agent is, the better it will be able to determine the
arrangements that would be better for its goals, and to find routes to
access those rearrangements.

Let's take a brief segue on the scale of possible intelligences.
Imagine trying to teach cats quantum mechanics, or taking a goldfish
to the opera. They are minds that just can't access particular
domains. Which invites the question: what is beyond human minds? I
have here a theoretical toy from computer science, called AIXI (Marcus
Hutter). AIXI is a mathematical idea that can be described fully and
precisely, though it can't actually be run on any computer. How much
juice could you squeeze from an atom? It's a bit of a simplification,
but AIXI has all of the possible patterns, all of the possible ways
different patterns can be correlated. It can figure out which ones are
consistent with the data, and then it uses all of the consistent
patterns to predict things. AIXI could look at a video clip of this
auditorium and deduce the laws of physics. It could deduce that you
are made of proteins. By looking at your face, it could figure out a
probability distribution over who you are. The way that this is
possible is that different possible values are correlated, and AIXI
exploits all of these possible correlations: all of the correlations
that scientists know today, and all of the ones that they never will
come up with. That's what a powerful intelligence might be able to do.
If you think about it, the distance between humans and AIXI is much
larger than the distance between cats and humans, in terms of which
data sets you can make use of. Humans can follow out any computer
program by hand, and that might suggest that we can emulate anything,
but that skips the timing issue. If it would take us eons, say a
century, to emulate one second of a particular intelligence, that's
not a useful period. Intelligence is about responding to events as
they happen.
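For anyone comparing notes later: AIXI does have a one-line
definition. This is my transcription of the standard expectimax
formula from Hutter's published work, not something from the slide:

```latex
% AIXI's action rule: expectimax over a Solomonoff mixture (after Hutter)
a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
      \bigl[ r_t + \cdots + r_m \bigr]
      \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, q ranges over programs
(candidate world-models), ℓ(q) is the length of q, and the o's and r's
are observations and rewards. The 2^{-ℓ(q)} weighting is why simpler
consistent patterns dominate the prediction, and the sum over all
programs is why AIXI is uncomputable.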

AIXI is a theoretical toy. How plausible are smarter systems in the
real world? When you consider how humans are made, it's pretty
plausible. Humans were made by evolution, right? Our intelligence
comes from a slow blind process of evolutionary trial and error.
Kludge on kludge. On our particular branch of the tree of life,
intelligence was just slowly and blindly increasing, just far enough
that maybe 100,000 years ago humans were able to close the loop and
make culture. After that evolution couldn't make us any smarter, and
culture took over. Humans are right on the cusp of general
intelligence.

"Humans are the stupidest a system could be, and still be generally
intelligent." -- Michael Vassar

And then we began to be able to handle these processes to make our
minds better. Generally intelligent means agents that are able to
build tools. It's plausible that humans are about as stupid as you can
be and still be generally intelligent.

What change might an actual superintelligence create? What are the
details? Deep change. It accesses possibilities far from the ones we
can reach, and all of the capacity of matter to achieve some goal.
First visualization: molecular nanotechnology, precise control of
matter at the atomic level. The second visualization is computronium:
what is the most efficient computer in a given size, like a human? And
then use all of that computational intelligence to optimize for your
goal. The quantities are quite large here. And then there's light-
speed expansion to move outward and put all resources towards your
goal. There was this transformation from humans. With more powerful
intelligences, we'd expect resources to be fungible: resources that
can be used for one function or another function, but not both at
once. There is a finite amount of usable energy and usable matter,
space and time that we can reach; if we build computation, there's a
finite amount of CPU and memory in there. So, transforming the world
on this scale isn't about muscle. It's about intelligence, or
optimization. Tiny muscles can control big cranes and big power, and a
small ai that starts with nothing but the ability to control pixels on
a screen can acquire bigger muscles. Think about the things
girlfriends and boyfriends are able to persuade each other to do.
Humans are hackable. They are messy systems. Give a smart enough ai
access to a screen that a human can see, and that ai has access to
human arms and legs, which it could use to plug itself into the
internet, and eventually build technological manipulators that make us
humans obsolete.

Past a certain point, an intelligence's best option is to build more
intelligence, intelligence that it can then use to pursue its goals.
Consider most long-term goals that this child might have: he might
want to be a chemist, doctor or third-world worker. He might want to
get some sleep. Education. A good social network. Whatever his goal
ultimately is, he wants general capabilities and resources to serve
that goal. The argument is that past a certain point, most long-term
goals are best served by building a more intelligent system that can
figure out how to achieve those goals. The best long-term move is for
this child to build a superintelligence that shares his goals. Smarter
decides to make smarter. And then the world changes on a scale faster
than humans can think. This is why the time to think about shaping the
intelligence explosion is now. Notice again that this is a very
general argument. We're not relying on Moore's law or accelerating
change; we're just talking about ai and what it might be like when we
get there, for any of the broad ways to get there.

Claim 2. An intelligence explosion may be sudden. Different phenomena
occur on different time scales. Water drops change in seconds.
Continents change over hundreds of millions of years. Galaxies collide
over hundreds of millions of years. The time scale on which humans
think is one possibility among many. Artificial intelligence might
usher in a different and faster time scale. As noted, there have been
similar speed-ups before: evolution takes millions of years; the
cultural processes that let humans do what they do take thousands of
years or even months. Also, different types of mental hardware work at
different speeds. Neurons can fire in five milliseconds; the fastest
transistors switch about a million times faster. How fast can human
brain emulations run? Neuroscientists' guesses span a few orders of
magnitude. But if you did have a human brain emulation running, most
people think that if you had five times the hardware, you might be
able to make it run five times the speed. And there's no reason that
emulating these meaty human brains, with their accidental spaghetti
code, is the best we can do; no reason that something couldn't operate
at a faster time scale than we do. The real question isn't how fast ai
could think, but rather how fast ai could arise. How fast could the
development occur? How much warning would we have?

Past a certain point, engineering intelligence is the fastest ..
Steve Rayhawk's point is that software can be copied: making the first
digital intelligence is hard, but making the second and third is just
copying software. Hardware bottlenecks? The first software might
require $1M/day to run, but if the first ai could run on an ordinary
computer, then we could have billions of copies; there are literally
billions of personal computers already. These ai could work in the
economy and funnel money back into ai research, or do ai research
directly. Mind editing with digital precision: you can do this with
drugs, but it's imprecise. The natural process takes 20 years and a
lot of labor, and it often produces unexpected results. But if we had
digital access to a given mind, we could try and test changes to that
mind in minutes. We could try many variations as evolution does, but
in minutes instead of decades and with more strategies: which variants
were best at math, best at earning money at jobs. Digital minds could
be easy to edit, and then we could just copy the best variants across
the hardware. Hand-directed digital evolution. Then of course there's
feedback: the smarter the cohort of ai is, the more smarts it has to
build further intelligence. Ai doesn't need to move on the timescale
that you're used to. In particular, there's reason to believe that the
process might be fast. Just as there was a transition from evolution
to culture, so a transition from fixed human brains to editable
digital minds might bring a new time scale. The argument is very
general; we're talking about dynamics and properties over a broad
range of ways that ai might occur.
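The copy-mutate-select cycle she describes can be sketched in a few
lines. This is purely my own illustration of the idea, with a toy
vector standing in for a "mind" and an arbitrary benchmark standing in
for "best at math":

```python
import random

# Toy "hand-directed digital evolution": copy a candidate, mutate the
# copy, and keep whichever variant scores better on a benchmark.
random.seed(0)

def score(mind):
    # Stand-in benchmark; peak score 0 when every entry equals 3.0.
    return -sum((x - 3.0) ** 2 for x in mind)

mind = [0.0] * 5
for generation in range(2000):
    # A digital copy plus a small random edit is nearly free.
    variant = [x + random.gauss(0, 0.1) for x in mind]
    if score(variant) > score(mind):   # keep the better-performing copy
        mind = variant
print(f"final score: {score(mind):.3f}")
```

Each iteration is one copy-edit-test cycle; the point of the passage is
that for digital minds such a cycle takes minutes rather than the
decades a biological generation takes.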

Claim three. An uncontrolled intelligence explosion could kill us and
destroy practically everything that we care about. As we discussed
earlier, the most powerful intelligences will radically rearrange
their surroundings. They will have some goals or other, and the
current arrangement of matter that happens to be present might not be
the best possible arrangement for those goals. That's bad news for us.
Most ways of rearranging humans would kill us. Most ways of
rearranging our environment would kill us. The question isn't most
rearrangements, but rather most rearrangements that an ai might want
to do. Why would an ai want to keep us intact? Some people suggest
that ai would like humans because they are good trading partners. But
if you think about it, most goals are realized by some other use of
your matter, your space and your environment, beyond people-like uses.
The Apple IIe has many uses, but the metal inside it and the space it
takes up on your desk have better uses still. To make vivid the space
of possibilities here: even if the ai wants trading partners, it seems
unlikely you're the most optimal trading partner that an ai could
design. Some people suggest that ai would leave us alone in the way
that we leave ants and bacteria alone. But remember that we have left
less and less alone: we've learned more and more, until we have found
better arrangements for our goals. It seems unlikely that a
superintelligence, a being far more able to invent new possibilities,
would be unable to find a better use for the atoms and air we breathe.
Sometimes people say that an ai would incorporate our culture and
ideas into its knowledge base, and we should be happy with that. The
problem with this is that your culture can be scrapped and redesigned,
much as a computer program might be scrapped like obsolete spaghetti
code. Consider how starkly values can vary across species. We think
fruit is yummy. Dung beetles think feces are yummy. Our tastes come
from molecules and receptors, and these can be arbitrarily set and
reset as one designs intelligences. It's no different for abstract
values. Humans enjoy rich experiences, like curiosity, play and love.
This content is tremendously important to us, but it's a complex set
of receptors and aims. We're no more likely to find intelligences that
share our particular values than we are to find aliens that share our
language. And if our spaghetti-code culture is scrapped and replaced
for ai goals, all of our content is very likely to disappear. So note
again that the arguments are very general. They apply to any
sufficiently powerful entity whose goals aren't carefully rigged to be
fulfilled by our own existence. Practically any intelligence that
isn't specifically designed not to is likely to destroy us in the
course of pursuing its goals.

Claim four. A controlled intelligence explosion could save us and save
everything we care about. It's difficult. Remember that ai are like
non-starfish: there's this huge space of possible minds, possible
goals and possible machines. We can't control an intelligence
explosion once unleashed; it's like trying to control an atomic bomb
in the middle of the explosion. Instead, control the type of
intelligence explosion to be released in the first place. Are possible
minds too chaotic to predict? A bridge builder doesn't need to predict
all possible bridges. We just need to design one mind that we can
predict and that reliably does what we want. Note that designing the
ai to have a particular goal is not putting an ai in a cage. It's an
ai that organically wants to use all of its intelligence to strive for
one particular goal. Right now, you probably do not want to kill
people; if I had a pill here that made you want to kill people, you
wouldn't take it. A more intelligent, less messy system could be
designed to be even more stable in its values, at least more stable
than you. If you do build such an optimizer, you'd better be sure that
you direct it towards the right goals. Think of images from folklore,
of sorcerers and wizards, where humans get frozen into a pleasant but
permanent state of being: a loss of life. The very fact that you can
regret that, and that that future is lost, means the hell you're
imagining is not optimized for your goals. If rich open-endedness is
what you value, then that's what an ai optimized to your values would
create. We'd like to avoid human extinction. If we managed to do all
of this with a controlled intelligence explosion: the current state is
6.7 billion lives, plus all potential progeny. Compare the current
state, non-ai human extinction, and a stable controlled intelligence
explosion. A tall order, but we can make incremental progress towards
it. We can do work on moral psychology to figure out what it is that
we value; theoretical computer science on models for prediction; human
decision making. How to be sure we're really getting this right? You
do actually have to get it right the first time. And there's a lot of
other pieces too. Catch me at the break.


--------


I am interested in the technical challenge of whole brain emulation.
My talk is to some degree about whether, in an ideal world, we could
do this. It's going to be about optimization. This option .. we have
to pick something and be straight about causality. Why are we
interested in emulating brains when there are so many good reasons to
do pure ai? This is a good exercise. We actually know quite a lot
about the brain and possible scanning methods, and we can look at time
scales. I am looking to show a rough time scale of what we can do and
when we could do brain emulation. That puts a time limit on ai,
because if you don't get there before we do, then yeah. We know
roughly what to do with emulations: we have a lot of experience
dealing with human intelligence. An intelligence explosion.. research
ethics. In general I believe that this could be a very big thing. The
philosophical impact can be demonstrated; the philosophy departments
are tearing their hair out. Robin has written about the economic
impact of copying. We might want to know how far away this is, because
it could be dangerous. If you happen to subscribe to functionalist
views, this could be a way of extending your life indefinitely. I am
talking about emulations.

This is about engineering without understanding what's going on. What
does the brain do, and what is memory, consciousness or intelligence?
You can also study it on a more fundamental level: how do ion channels
work? What if you get to adjust the drawings for this? You can make
new use of this. If we have a low-level understanding, we might be
able to get something that could work, but a high-level understanding
might not allow us to make a copy. In general, I don't have time to go
into the details. There are many philosophical questions, in
particular the problem of whether this is mind uploading. This is
about brain emulation. I don't know if a complete simulation will make
a mind; I want to test it. I want something that will do what the
brain does. There is some level of detail that suffices; I don't know
where it is. We could make an atomic-level copy of the brain. That
would also be a brain, personal identity issues aside. What about the
high level? What if we replace some of the structures, some of the
functions? Well, you can't just get intelligence by stringing together
some vision processing system, or just an auditory and a decision
system. That would most likely require very deep understanding of a
brain, and it wouldn't correspond to a particular brain. What goes on
chemically in the brain? This doesn't require us to figure out what
the mind is; it's just a matter of chemicals and figuring out how to
simulate them. There are levels. Hopefully we don't have to deal with
quantum states. At some level we can do this. The interesting point is
that gradually we can acquire data. It is easy to acquire data at the
low level; we don't need much understanding, just lots of scanners.
But it takes lots of computation to simulate the quantum level.
Simulating abstract neurons, like in a neural network, is much less
complex.

Scale separation in the brain. There is a lot going on between
individual atoms and the gas as a whole. You can ignore that with
statistical mechanics, because the atoms average together to make
pressure and volume, and we get properties that we can understand
without understanding what the molecules are doing. Stuff going on at
the small scale averages out at the large scale. We also have pretty
large stuff breaking down into small steps. So, if there is a scale
separation, we might be able to separate the levels. If the brain is
similar, we might be able to do this emulation without always needing
the lower levels. I am rather optimistic about this. I haven't heard
any good arguments for why there is no such separation ... this is a
scientific question that we need to resolve.

Function from image. How much information about the function of the
brain can we get from an image of a brain? There's brain tissue. Those
pictures of very beautiful neurons floating in space: that's a false
image. The brain is more like a 3D puzzle. It's not only where the
neurons are; there are branches going through the image, and there's
also inhibitory and excitatory stuff going on. How much could you get
from an electron micrograph? There are interesting complications.
There are fundamental issues. What about the glia? They actually turn
out to be not too computationally expensive to simulate. The rough
consensus that we arrived at is that we have to get this to .. we need
to know the cells; we probably don't need too much of the really low
level. That would require a resolution of about 5x5x50 nm, but it
turns out that we can't go far beyond that. We need to figure out the
relevant information. This is what's limiting us currently: we can't
scan a large volume. Scanners can only scan a little voxel, a micron
or smaller, and we want to scan a 1.4L brain, and we want to scan it
fast. There are many scanning machines. This would correspond to many
methods.
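To get a feel for that scanning bottleneck, here is my own rough
arithmetic (the voxel rate is an illustrative assumption, not a figure
from the talk): even at coarse micron-scale voxels, covering a whole
brain takes a long time.

```python
# Rough feel for the scanning bottleneck: covering a 1.4 L brain one
# micron-scale voxel at a time. The throughput figure is an assumption.
brain_volume_m3 = 1.4e-3
voxel_m3 = (1e-6) ** 3                   # one 1-micron cube
voxels = brain_volume_m3 / voxel_m3      # ~1.4e15 voxels
rate_voxels_per_s = 1e8                  # assumed scanner throughput
days = voxels / rate_voxels_per_s / 86400
print(f"{voxels:.1e} voxels, ~{days:.0f} days at 1e8 voxels/s")
```

And the 5x5x50 nm resolution the earlier consensus calls for multiplies
the voxel count by several orders of magnitude again, which is why
massively parallel scanning (many knives, many beams) keeps coming up.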

This is my favorite. This is at optical resolution. This is from Texas
A&M: the knife-edge scanning microscope. It has a knife slicing some
brain tissue as it goes along. The result, for example: this is a
piece of mouse spinal cord. You can 3D-reconstruct it. Thanks to Todd
Huffman, I found this. They actually scanned an entire mouse brain at
this resolution. These are Golgi-stained neurons in the cerebellum of
the mouse. They have a nice structure. There are a lot of fibers
running through these branches. Not quite as messy as the cerebral
cortex. This is real data and real projects that we have right now. We
need better ways to handle this data. There are a lot of ways to get
the data out of there.

Array tomography is a good way to get the chemical state of the
tissue. You can use electron microscopy and AFM to get finer
resolution. The problem is interpreting the scans. That will take a
lot of development. People are working on it, but more people are
needed. You can do this by hand: you can have grad students do this by
hand, and grad students are cheap. It's great to have them around. The
intelligence explosion would allow us to have even cheaper grad
students. There's some work on turning this messy construct into nice
3D structure. So the next step is software that takes this and figures
out a computational model. We have a lot of good models based on
compartment modeling.
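For those who haven't met compartment modelling: the simplest possible
version is a single compartment treated as a leaky capacitor. This is
my own minimal sketch with generic textbook-style parameters, not a
model from the talk:

```python
# Single leaky integrate-and-fire compartment driven by a constant
# current: the degenerate (one-compartment) case of compartment
# modelling. All parameter values are generic illustrations.
dt, T = 1e-4, 0.1                        # 0.1 ms steps, 100 ms total
C, g_leak, E_leak = 1e-9, 5e-8, -70e-3   # capacitance, leak, rest (SI)
V_thresh, V_reset = -50e-3, -65e-3       # spike threshold and reset
I_in = 1.5e-9                            # injected current (A)

V, spikes = E_leak, 0
for _ in range(int(T / dt)):
    dV = (-g_leak * (V - E_leak) + I_in) / C   # membrane equation
    V += dV * dt                               # forward-Euler step
    if V >= V_thresh:                          # threshold: spike + reset
        spikes += 1
        V = V_reset
print(f"{spikes} spikes in {T * 1000:.0f} ms")
```

Real compartment models chain hundreds of such compartments per neuron
(with active Hodgkin-Huxley-style conductances) and couple them along
the reconstructed morphology; this is what the "6-compartment neurons"
in the earlier simulation-record note refers to.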


- Bryan
http://heybryan.org/
1 512 203 0507

Bryan Bishop

Oct 3, 2009, 5:26:02 PM10/3/09
to diytrans...@googlegroups.com, kan...@gmail.com
On Sat, Oct 3, 2009 at 3:29 PM, Bryan Bishop <kan...@gmail.com> wrote:
> On Sat, Oct 3, 2009 at 10:09 AM, Jata <pari...@gmail.com> wrote:
>> I know a lot of us are in NY right now, at the Singularity Summit, and

Ed talks way too fast. But here you go.

These are one of the cell types that are most atrophied in the
schizophrenic brain. Other types are lost in epileptic tissue. While
an acceptable first abstraction, they are still very complicated. You
have a couple hundred billion of these constantly computing at the
millisecond time scale. How are you going to deal with the really
difficult read-out of the brain over some reasonable time scale? This
highlights one of the key success stories of the 20th century:
pharmacology. Antidepressants, antiepileptics. These bathe the entire
brain in drugs, hitting both the circuits that matter and the circuits
that don't matter to some pathology. Perturbing neural activity
matters.

Here's my neuromodulation theory slide. Here are three neurons that
are connected. You can see activity propagating about. Neurons are
usually negatively charged with respect to the world; you bring the
voltage up when they are active. When one neuron sends information
across this gap, you have chemicals being transmitted, which causes
depolarization of the next neuron. You have a channel and a membrane.
There are positively charged ions outside. When a neurotransmitter
arrives, you can get those channels to open; the charge flows in and
you get a spike. Of course, the transmitter usually goes away and the
neuron goes back to its resting state. Can we program this pattern of
activity by very specific cell activation? Can we generate
computation, behavior, or one day a subjective experience? We can try
to put that information into the downstream cells. One of the ideas
that we have been working heavily on is whether or not we could use
light as a sculpting tool for neuronal activity. Can we spawn activity
with a pulse of light? These are two-photon images, deep images in the
brain. These kinds of microscopes have been used in basic science, but
endoscopists have been using these too. What if we used light to turn
cells on and off? What if we beam light at those cells?
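The mechanism described here (a light-gated channel depolarizing the
membrane until it spikes) can be caricatured numerically. This is my
own toy sketch, with made-up parameter values, of a leaky compartment
that gains an extra conductance only while the "light" is on:

```python
# Toy light-gated neuron: a leaky compartment plus a channelrhodopsin-
# like conductance (reversal near 0 mV) that opens only during "light".
# All parameter values are illustrative assumptions.
dt = 1e-4                                # 0.1 ms steps
C, g_leak, E_leak = 1e-9, 5e-8, -70e-3   # capacitance, leak, rest (SI)
V_thresh, V_reset = -50e-3, -65e-3       # spike threshold and reset
g_chr, E_chr = 2e-7, 0.0                 # light-gated conductance, reversal

def run(light_on):
    V, spikes = E_leak, 0
    for _ in range(int(0.05 / dt)):      # 50 ms of simulated time
        g = g_chr if light_on else 0.0   # channel open only in the light
        dV = (-g_leak * (V - E_leak) - g * (V - E_chr)) / C
        V += dV * dt
        if V >= V_thresh:                # threshold crossing: spike, reset
            spikes += 1
            V = V_reset
    return spikes

print("dark:", run(False), "spikes; blue light:", run(True), "spikes")
```

In the dark the cell sits at rest and never fires; with the channel
open, positive charge flows in and the cell spikes repeatedly, which is
the qualitative behavior the optogenetics experiments below rely on.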

Nagel et al. in 2003 found a blue-light-gated ion channel. It's a
molecule found in a green alga, used to drive its flagellum around.
And it works in normal tissue: when you light it up with blue light,
it lets charge in. More importantly, we can target specific genetic
cell types. We can do that by taking the gene for this genetically
encoded protein, putting it into a virus, and popping it into the
cell. We can see it lighting up on the cells; we can see the cells on
the border here. If we modify it, we can have it drive specific spike
trains, not like the ones in your auditory cortex right now. There are
little "thoughts" underneath the thought train. These are being used
to test the specificity of generating a specific behavior. You can put
this into an animal and it will respond as if it had been touched.

I want to highlight a new clinical area. Some people have lost their
photoreceptors in their retina. Several groups, including ours, have
been pursuing the opsins; Sackpath and Bil Housworth at Florida. Here
is a mouse whose blindness mimics that of a human. We delivered a gene
to the mouse and were able to sensitize it to light. You could take a
camera and an electrode array and digitize that data, or if you can
just go straight to the retina, that might be better. This is a mouse
we put in a water maze. This is a blind mouse. It's supposed to go to
the platform underneath the surface. It goes down an alley that is
incorrect. Do one administration of the virus carrying the gene for
the light-sensitive protein, put the mouse back in, and you can see
it's now doing much better: it can avoid obstacles and go to the
platform. Can there be a clinical use here to treat forms of
blindness? The acting CEO of Eos Neuroscience. Is this molecule safe?
We'll talk about viruses second. The molecule comes from algae; is
this okay to put in the brain?

Han et al., 2009, Neuron 62(2):191-198.

We wanted to look for antibodies in serum, to see if there were immune
reactions against these molecules. So far we haven't seen any
pathology. Adeno-associated viruses have been used in trials totaling
>600 people without a single serious adverse event. Blue light
elicited spikes in non-human primate brain: activating neurons with
light. Over a period of many months, does the signal run out? We see
high-fidelity signals throughout.

Both from a neuroscientific standpoint and for some clinical cases, we
also wish to be able to silence neurons. Halorhodopsins are
light-activated chloride pumps from archaebacteria: you shine light
and the spikes are deleted. But as you see, the amplitude of the
hyperpolarization is much less than the spike. So we started exploring
genomic diversity, screening molecules from all over the world that
could yield higher currents. This is an example of a digital
off-switch where we can shut down 99.9% of the spikes of the neuron.
The neurons look great; they are not suffering. We've also started to
discover that you can shift the spectrum of these. Here are two
populations of cells: one is shut down by blue but not by red. We can
silence projections from one region and those from another
independently, perturbing each to understand when it is needed. From a
hardware standpoint, one of the problems with visible light is that it
doesn't go very deep into tissue. Jake in the lab wanted to figure out
if there's a way we could do implantable high-density fiber-optic
arrays, which might allow high-throughput screening of brain regions
(Bernstein et al., in preparation). Recently we've been able to
fabricate these things. This one is 8 or 9 mm and can independently
control about two dozen sites in the brain, maybe one day even
hundreds or thousands of light sources. There's other tech you can
combine with this: Chan et al. figured out how to do fluidic injection
of these viruses. The brain is 3D and complicated, and we need to
sensitize precise circuits from a behavioral and scientific
standpoint. Here is one of her triple-injection arrays, from a paper
that is going to come out soon; you can see 3 different points in the
motor cortex that can be individually controlled. Invasive implants
are already used in electrical form in neurology and psychiatry: more
than 100,000 people have implanted electrode arrays, and 30,000 or
more people with Parkinson's have deep brain stimulators. There's a
surgeon doing Tourette's syndrome implants in adolescents. Can we use
our optical strategy?
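The on/off idea can be sketched in the same toy integrate-and-fire style
(again my own illustration, nothing from the talk): a "blue" input stands in
for a channelrhodopsin-like excitatory current, and a "yellow" input for a
halorhodopsin-like silencing current. All constants are invented.

```python
import numpy as np

def simulate(blue, yellow, dt=0.1, tau=10.0, v_rest=-70.0,
             v_thresh=-50.0, g_blue=25.0, g_yellow=25.0):
    """Leaky integrate-and-fire neuron with an excitatory blue-light
    current and an inhibitory yellow-light current; returns spike times (ms)."""
    v = v_rest
    spikes = []
    for i in range(len(blue)):
        i_in = g_blue * blue[i] - g_yellow * yellow[i]  # net light-driven current
        v += dt * (-(v - v_rest) + i_in) / tau          # leaky integration
        if v >= v_thresh:                               # threshold -> spike, reset
            spikes.append(i * dt)
            v = v_rest
    return spikes

n = 1000                  # 100 ms at dt = 0.1 ms
blue = np.ones(n)         # constant blue drive -> tonic spiking
yellow = np.zeros(n)
yellow[n // 2:] = 1.0     # yellow "off switch" for the second half
spikes = simulate(blue, yellow)
print("last spike at", max(spikes), "ms")
```

In this caricature the yellow current exactly cancels the blue drive, so
every spike falls in the first 50 ms, before the off-switch comes on.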

Here's an example from Jake Bernstein, along with Emily Coug, who
wrote the software; this is also going on at the lab at MIT. We
labeled different prefrontal regions with different molecules and
implanted the fiber arrays to hit these regions. Here's an example
result where we do Pavlovian fear conditioning (a tone paired with a
shock), one of the most popular neuroscience methods of the last 100
years: a neutral cue becomes associated with an aversive state. The
controls, even though they are being re-exposed to the cue in the hope
that it becomes neutral again, are not getting better. The goal is an
interface for treating cognition, emotion, and movement.

Towards noninvasive means. TMS has been around for 25 years. Last
fall, it was approved by the FDA for stimulating the left dorsolateral
prefrontal cortex. Can we improve this by sculpting fields and
targeting deeper structures? There was this idea of developing "brain
coprocessors". Devices that can read out and deliver information. We
can develop hardware that confronts the 3D structure of the brain. How
do we generate hypotheses about the brain? Can we start to build
intelligence? Not just open loop, but ways of augmenting these
emotional and cognitive circuits with real intelligence. I started
teaching a class called Principles of Neuroengineering. There's a lab
class too, Applications of Neuroengineering. In collaboration, we are
also teaching Neurotechnology Ventures, for a high-risk, high-drama
space like neuroengineering.
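The "brain coprocessor" loop described here could be caricatured as
read-detect-stimulate. A minimal sketch (my own; the threshold detector is a
made-up stand-in for whatever real decoding such a device would need):

```python
from collections import deque

def coprocessor_step(sample, window, threshold=5.0, width=10):
    """One tick of the closed loop: store the latest activity sample
    and decide whether to fire the silencing light. "Pathological" here
    just means the mean of the last `width` samples exceeds a threshold."""
    window.append(sample)
    if len(window) > width:
        window.popleft()                 # keep a sliding window of recent activity
    return sum(window) / len(window) > threshold

window = deque()
recording = [1, 1, 2, 1, 8, 9, 10, 9, 8, 2, 1]   # a burst in the middle
commands = [coprocessor_step(s, window) for s in recording]
print(commands)   # True marks ticks where the device would stimulate
```

The point is the loop structure, not the detector: a real closed-loop system
would replace the running mean with an actual model of the circuit it is
trying to augment.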
