[cs462][COAL] Neural Networks

Taylor Alexander Brown

Jan 10, 2017, 3:54:43 AM
to Coal-capstone
These lectures from MIT OpenCourseWare provide a good background on
neural networks:

https://www.youtube.com/watch?v=uXt8qF2Zzfo&index=12&list=PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi

https://www.youtube.com/watch?v=VrMHA3yX_QI&index=13&list=PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi

Basically, neural networks represent the flow of information from a set
of inputs (e.g., pixels in an image or spectra in a "hyperpixel")
through a network of basic mathematical operations, such as multipliers,
adders, and thresholds, to a set of outputs representing
classifications (e.g., typographic symbols or land surface types).

Neural networks are initialized randomly and are not useful until they
are trained. Neural nets are trained with sample data by repeatedly
comparing their classifications with known values and adjusting weights
and thresholds to minimize error. Even though the number of paths
through a network grows exponentially with its size, training remains
computationally feasible because redundant computations can be reused;
this reuse is the key idea behind backpropagation. A trained neural
network encodes nonobvious generalizations about each type of object it
is trained to recognize.
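
As a toy illustration of that training loop, here is a sketch in Python
for a single linear neuron; real libraries do the same
comparison-and-adjustment over whole networks at once via
backpropagation:

# toy sketch: gradient descent on squared error for one linear neuron,
# repeatedly comparing outputs with known values and adjusting weights
def train_neuron(weights, samples, rate=0.01, epochs=100):
    for _ in range(epochs):
        for inputs, target in samples:
            output = sum(w * x for w, x in zip(weights, inputs))
            error = output - target
            weights = [w - rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights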

A well-trained neural network can classify novel data with "pretty good"
accuracy. The size of the training data and the structure of the network
may affect accuracy. The output of a trained neural network is
effectively a set of probabilities that the input belongs to each type.
Classification may return only the type with the highest probability,
the set of types with probability above some cutoff, or the result of
some other analysis.
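
For instance, assuming the output is just a list of per-class
probabilities, the decision rules above would amount to something like:

# minimal sketch, assuming `probabilities` is a list indexed by class
def most_likely(probabilities):
    return max(range(len(probabilities)), key=lambda i: probabilities[i])

def above_cutoff(probabilities, cutoff=0.5):
    return [i for i, p in enumerate(probabilities) if p >= cutoff]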

Neural networks are a good candidate for our project because the
generalizations we seek to make are nonobvious and would be problematic
to formulate in a set of probabilistic rules. Using neural networks, our
project becomes a problem of acquiring and classifying sample data,
structuring the networks, and training them to recognize the classes we
are looking for.

The lectures describe the notion of "pooling", using different stages of
a neural network to encapsulate different features of the data. Although
this was discussed in the context of a single network, I see an analogy
with the way we have broken our pipeline into stages. Developing the
stages as separate modules simplifies each problem and opens up the
possibility of reuse.

As an example of reusability, I mentioned it would be interesting to see
what ESA is doing. I assume they and everyone else doing spectroscopy
have unique and incompatible data formats. Unless their formats can be
translated to ours, the initial mineral identification stage will only
be compatible with AVIRIS data. However, mining identification relies
only on mineral-classified images, which could be generated by our
algorithm or anyone else's, so everything from that step onward could
potentially be reused by other teams.

We should consider distributing not only the software algorithms but
also the trained data which can be applied by other teams to unique data
sets. During the break I developed an application [1] that uses an
optical character recognition library to read the user's handwriting. I
was able to reuse and redistribute a trained data set without doing any
training of my own. Of course, any trained data that we redistribute
would only be useful to people looking for the same things we are
looking for. A weakness of machine learning and neural networks in
particular is that the whole system has to be retrained if you change
what you are looking for.

The lectures didn't define the term backpropagation, but judging from
the Wikipedia article [2] it is essentially the training technique they
described. If, like me, you are inexperienced with neural networks, and
you have two hours to spare, I recommend watching the lectures in full.


Taylor


[1] https://github.com/browtayl/zijiao
[2] https://en.wikipedia.org/wiki/Backpropagation

Brown, Taylor Alexander

Jan 11, 2017, 5:45:49 AM
to Coal-capstone
So the neural network lectures have shed new light on some of the questions we have been kicking around. They are part of a full course sequence in artificial intelligence, so there may be some other gems that illuminate alternatives to neural networks we may decide to use. At this point, though, our hypothesis is pretty much that "neural networks can recognize the data we are interested in," so perhaps we don't really need to scour the rest of the machine-learning literature. It sounds like a pretty good intuition, especially if we get our hands on some powerful computers.

Assuming that our goal is to classify every hyperpixel in an image, I do not believe it would be necessary to load the entire image into memory. It should be enough to read one hyperpixel at a time from an image into memory, classify it using a trained classifier, and write a classified pixel to an output file. This raises the question of what format to use as the output of mineral classification (and the input of mining identification). Obviously a lossless bitmap format of some kind, and one that can store enough data per pixel to represent the number of classes we are interested in. It would be convenient if each class could be represented by some human-meaningful color, though the colorspace could be defined either as a parameter to the image-processing procedure or remapped on its output.
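
As a sketch of that last idea (the class indices and colors here are
made up):

# hypothetical sketch: map class indices to human-meaningful RGB colors
# when writing the classified image; in practice the palette would be a
# parameter to the image-processing procedure
PALETTE = {
    0: (128, 128, 128),  # e.g., bare rock: gray
    1: (255, 215, 0),    # e.g., mineral of interest: gold
    2: (139, 69, 19),    # e.g., soil: brown
}

def to_rgb(class_index):
    return PALETTE[class_index]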

These comments are based on the assumption that each hyperpixel has been classified into a single mineral type, that is, that each hyperpixel was classified as the single most likely choice out of a set of possible alternatives. It would be far more complicated to adopt a probabilistic approach in which each hyperpixel is associated with a probability of belonging to each type. Given the apparently youthful state of the literature on using neural nets to process hyperspectral images, I would be surprised if anyone has done that before, so it may be worth looking into. Passing all this extra data down the pipeline might help out the mining classifier, although it would be far less convenient to work with an ad hoc data format, and it is not clear that it would really be necessary. If we do go with discrete mineral classification, and we find that the results down the line aren't accurate enough, then we could try a continuous probability approach or suggest it as a direction for further research.

I am also assuming that we are interested in classifying each pixel rather than, say, groups of pixels or subsets of pixels. Perhaps our data is such that it would be more accurate to pass in a square of hyperpixels rather than just one, but unless there is research to the contrary it is not obvious that this more complex approach would be any better. Again, if we have problems with accuracy we could identify some sort of pooling as an alternative or direction for further research.

All this talk about the algorithm overlooks the initial training stage, which seems to be a separate problem. Users of our application should have the option of using data that we have trained as well as generating training data of their own. We should consider redistributing our data in some useful way. During my project over the break I got acquainted with GitHub's Releases feature [1], which allows you to upload large binary files such as executables or data files. We could use something like this if we don't want to store blobs in our version control system.

I don't have a feel yet for how big a trained neural network, which ends up being used as a black-box classifier function, needs to be for our purposes. It was interesting to note from the lectures that once trained, a neural network can have most of its neurons removed and still work pretty well, or be retrained in minimal time.

Anyways, part of our development time for each stage will involve structuring and training a neural net, not only implementing the algorithm that uses it, and I wouldn't be surprised if the training takes longer than the programming. We will have to research how big and how complex a neural net has to be to solve each problem. It would also be nice if we could redistribute the algorithm we use to train the data, not just the trained data itself.

Last term we were asked whether we will be using supervised or unsupervised classification. The lectures described mainly supervised learning by backpropagation, but neural networks can be applied to both. However, it looks like [2] classification, a.k.a. pattern recognition, is considered to be a supervised learning task. This makes sense if we have a finite set of classes (mineral types, is a mine/is not a mine, is impacted/is not impacted) which we know (from the USGS library or wherever) and train the network to recognize up to some arbitrary level of accuracy. Feel free to find counterexamples.

So one way of looking at the first part of our project is that we are implementing a new algorithm to complement the other supervised classification algorithms [3] provided by the Spectral Python library. If we wrote a good neural-net-based classifier, it would certainly be a good candidate for submission to that library, although it might also make sense to maintain it as our own module. Our first priority should be making a classification pipeline that works for us, and if we can maintain compatibility with these other libraries, then all the better.

In one of our hangouts I suggested the notion of passing in a mineral identification function for reusability. At that time I had no understanding of neural networks, but I was strongly influenced by the functional programming practice of passing functions as arguments. Now that I see a trained neural network as just a black-box classifier, the function-passing approach seems that much better suited. How a trained net is represented will depend on our machine learning library, but the bottom line is that (for mineral identification anyway) it is some kind of function from a hyperpixel to a set of probabilities.

So in a bit of pseudocode, this is what I see our mineral classification algorithm shaping up to be:

## to train a neural network,
## take as input a set of hyperpixels, manual classifications for each, and desired accuracy,
## use some neural network library with preset or customizable cost functions, weight functions, etc.,
## and produce as output a trained neural network for use as a black box function.

# enumerate minerals
MINERAL_X, MINERAL_Y, MINERAL_Z = range(3)

# data structure representing a set of hyperpixels and classifications
classified_hyperpixels = [(pixel_0, MINERAL_X), (pixel_1, MINERAL_Y), ..., (pixel_n, MINERAL_Z)]

# some arbitrary value
desired_accuracy = 0.99

# in module coal.mineral: define trainer using some library (placeholder call)
def train(training_data, accuracy):
    trained_network = some_neural_net_library.trainer(training_data, accuracy, other_parameters)
    return trained_network

# generate a neural network trained to classify minerals
mineral_classifier = coal.mineral.train(classified_hyperpixels, desired_accuracy)

# to classify each hyperpixel in an input image,
# read and decode each hyperpixel from the input image,
# use a trained neural network to classify each hyperpixel as a particular mineral,
# and write each mineral classified pixel to an output image.

# define procedure that maps a mineral classifier over an input image onto an output image
# in module coal.mineral: read hyperpixels until EOF, classify each one, and write the result
def process_image(input_image, classifier, output_image):
    for hyperpixel in coal.format.aviris.hyperpixels(input_image):
        classified_pixel = classify(hyperpixel, classifier)
        write(classified_pixel, output_image)

# turn a hyperspectral image into a mineral classified image
mineral_classified_image = coal.mineral.process_image(aviris_data, mineral_classifier, mineral_map)

I should note that although I really like the term hyperpixel and would advocate for its use, some spectrometer vendor [4] asserts an unregistered trademark on the terms "HyperPixel" and "HyperPixels". However, they did not assert it on the lowercase term "hyperpixel" in their research paper [5], so it's sort of ambiguous, and arguably legally grey to begin with. Maybe there is a more formal, if not more catchy, term in the literature. Like the "coal" package on PyPI [6], it is something we might have to deal with.

Anyways, this is where I'm at in thinking about the problem. Feedback is welcome.


Taylor


[1] https://help.github.com/articles/about-releases/
[2] https://en.wikipedia.org/wiki/Artificial_neural_network#Learning_paradigms
[3] http://www.spectralpython.net/algorithms.html#supervised-classification
[4] http://www.bodkindesign.com/products-page/hyperspectral-imaging/hyperspectral-products/
[5] http://www.bodkindesign.com/wp-content/uploads/2012/03/SPIE-DS-2009-HPA-7334-17-approved-compressed.pdf
[6] https://pypi.python.org/pypi/coal/0.0.2

Taylor Alexander Brown

Jan 12, 2017, 7:32:49 AM
to Coal-capstone
Stack Overflow closes questions asking for opinions about the best
software, but I find that these often contain the most useful
recommendations. Here is a big list of neural network libraries for
various platforms:

http://stackoverflow.com/questions/11477145/open-source-neural-network-library/11477815#11477815

It mentions Caffe and TensorFlow as well as many others. Obviously it
wouldn't hurt to consult other lists as well. Our selection will have to
depend on the language and platform and may be influenced by performance
considerations. If we get access to a fast computer, we want to be able
to take full advantage of its capabilities such as vectorized
operations. We will also have to mind licensing issues.

On the issue of licensing, if we incorporate multiple codebases we might
want to take a look at the format Debian uses to denote licensing on a
per file basis:

https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/

An example from one of my projects:

https://github.com/browtayl/zijiao/blob/master/license.txt

Here is a brief paper by someone who did sensitivity analysis on AVIRIS
bands using neural networks:

http://www.aaai.org/Papers/FLAIRS/1999/FLAIRS99-057.pdf

Not directly related to our project, but a worthwhile example of one
person's approach to structuring a relatively small net to solve
problems with the same kind of data we are using.

I'm guessing that our neural net will want to take as input all 224
layers of a pixel. How big is a pixel layer? I'm not sure yet whether
our library will want continuous inputs or binary ones; if binary, we
will need to find some mapping from a pixel layer value to a bitvector.
I can see the complexity exploding, however, depending on how big the
value is. If we have a 32-bit integer, for example, we are looking at a
network with 224*32 inputs and I don't know how many links. More things
to think about.
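
If it does come to a bitvector encoding, the mapping itself would be
trivial, something like:

# hypothetical sketch: unpack an integer layer value into its low
# `width` two's-complement bits (LSB first), in case the library turns
# out to want binary rather than continuous inputs
def to_bits(value, width=32):
    return [(value >> i) & 1 for i in range(width)]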


Taylor

Taylor Alexander Brown

Jan 12, 2017, 4:05:10 PM
to Coal-capstone
So based on the file data/aviris_classic/111013_AV_Download.readme in
Google Drive, it looks like the radiance of each spectrum in a pixel is
stored as a signed 16-bit integer.

We will have to determine whether 224*16 inputs is too many for a neural
net for our purposes; if so, we might be able to scale the integers down
to 8 or 4 bits. My intuition is that scaling wouldn't hurt the accuracy
of the network, but at this point I have no evidence to support this.
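
For what it's worth, the scaling itself would be cheap; a sketch,
assuming numpy:

import numpy as np

# hypothetical sketch: scale signed 16-bit radiance samples down to 8
# bits with an arithmetic right shift, i.e., multiplying by 2^-8 and
# truncating
def scale_down(samples, bits=8):
    return (np.asarray(samples, dtype=np.int16) >> bits).astype(np.int8)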

We will need to think about the practical performance of both the
training stage (as a function of the number of training samples) and the
classification stage (as a function of the number of pixels in an image,
and the number of images we are interested in processing). I believe
both steps would benefit from number-crunching hardware, e.g., GPUs.

One other thing we will have to take care of before we configure the net
is preprocessing the data. In other words, it seems like we will need a
preprocessor function to map the source image to the image we actually
feed to the net. This step shouldn't take much memory, since it is a
pixelwise operation, and I assume it will be much faster than running a
net. So, building off of the previous pseudocode:

# to prepare a source image to be an input to coal.mineral.process_image,
# read each pixel from the source image into memory,
# preprocess it to account for variance (?) etc.,
# scale each value by 2^{-n} if necessary,
# and write each preprocessed pixel to an output file.

# in module coal.format.aviris: read pixels until EOF, preprocess each
# layer, and write the preprocessed pixels to the output image
def preprocess_image(source_image, out_image):
    for pixel in pixels(source_image):
        preprocessed_pixel = [preprocess(layer) for layer in pixel]
        write(preprocessed_pixel, out_image)

# preprocess an image in preparation for pixel classification
coal.format.aviris.preprocess_image(raw_image, aviris_data)

By the way, someone stop me if it seems like I'm going down the wrong
rabbit hole here. The neural net approach seems promising because we
programmers don't actually have to know too much about the meaning of
the data, just what the inputs and outputs have to be, and the net will
take care of the rest. However, I'm still thinking about these things at
a fairly abstract level, so we'll need to drill down, taking advantage
of other teams' work as much as possible. I think we specifically need
to research work that has been done with neural nets on AVIRIS data to
get a feel for the kind of structure that has worked in the past.


Taylor

Heidi Ann Clayton

Jan 12, 2017, 7:05:22 PM
to Coal-capstone, Taylor Alexander Brown

Hi Taylor,
Sorry if it seems like I'm ignoring these. I'll have more potential questions, insights, or ideas once we talk in person.

Heidi


Taylor Alexander Brown

Jan 13, 2017, 3:25:38 AM
to Heidi Ann Clayton, Coal-capstone
No worries, I've just been posting stuff as I find it and replying to
myself because I often find it helpful to reason in writing. Our meeting
should give us the opportunity to come at it from another angle and
recollect what we do know and don't know at this point.


Taylor

Taylor Alexander Brown

Jan 13, 2017, 5:54:29 AM
to Coal-capstone
Attached is a sketch of the neural network approach I have described.


Taylor
nn-sketch.pdf

Taylor Alexander Brown

Jan 13, 2017, 7:42:37 AM
to Coal-capstone
And here is a totally speculative sketch of a network for mining
identification.


Taylor
nn-sketch-2.pdf

Taylor Alexander Brown

Jan 13, 2017, 6:38:06 PM
to Coal-capstone
A nice tutorial example of NN training and classification with Caffe,
which we've been discussing:

http://caffe.berkeleyvision.org/gathered/examples/imagenet.html

http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb


Taylor

Taylor Alexander Brown

Jan 14, 2017, 2:19:34 AM
to Coal-capstone
To summarize, Heidi, Xiaomei, and I had a great meeting today. Weather
in the Northwest has been messy so the term began a little more slowly
(and snowily) than usual. We spent a lot of time today reasoning about
the neural networks approach as well as drilling down into some of the
data formats and looking over some tutorials.

We came in with a fairly shaky and abstract understanding, but I think
we have much more confidence now that what we're interested in really is
doable with the "magic" of neural nets. My biggest concern is our
computing capacity to train and execute these nets.

The team agreed that the sketches I made appear to be on the right
track, although many details need to be filled in. Right now our goal is
to keep learning and to start doing some exploratory programming, such
as starting off with some sort of "hello world" neural network. The
Caffe tutorial I posted in the Neural Net thread is a very good example,
which we looked over, though we agreed we would like to try something a
little closer to the format of our problem. Caffe seems like a good
library to use. I have access to an unused GPU machine (an old gaming
PC) which I might use to try some of the examples.

In order to "Parameterize" our networks we will need to look for other
examples of applying NNs to AVIRIS data. We would like to be able to
"steal" the topology of the network if we can in order to save time on
trial and error. A literature search for "AVIRIS neural network" is
bound to bring up most of what we are looking for.

In order to "Train" the mineral ID stage we need to work with the USGS
digital spectral library, so we took a closer look at the data formats.
The library contains ~6000 samples with classifications. There are
multiple samples for many of the minerals and other surface types, so we
should be able to take advantage of redundancy in training the net. We
may have to write a little regular expression to ensure that, for
example, we group "Actinolite HS116.3B" with "Actinolite NMNHR16485".
Variations in the observations will end up shaping the threshold
functions inside the net, but we don't have to worry too much about that
since the library will take care of it.
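
A sketch of that grouping, with a made-up pattern that just strips the
trailing sample ID:

import re

# hypothetical sketch: strip the trailing sample ID from a USGS spectral
# library record name so that multiple samples of one mineral group
# together for training
SAMPLE_ID = re.compile(r"\s+\S+$")

def mineral_name(record_name):
    return SAMPLE_ID.sub("", record_name)

assert mineral_name("Actinolite HS116.3B") == "Actinolite"
assert mineral_name("Actinolite NMNHR16485") == "Actinolite"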

As feared, the USGS digital spectral data is not in the same format as
each AVIRIS pixel. The format is documented here:

https://speclab.cr.usgs.gov/specpr-format.html

Each sample has 256 or more "layers". AVIRIS pixels have only 224, so we
will have to think about filtering the data points. One approach is a
simple filter that chooses the first datapoint corresponding to each
AVIRIS layer and discards the rest. Or we could choose the median
datapoint for each, rather than the first. Another approach is to
average all of the data points for a corresponding layer, although the
implication of this is not clear. Perhaps there are other approaches as
well.
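
Here is a sketch of the first approach, assuming numpy and made-up band
edges:

import numpy as np

# hypothetical sketch of the "first datapoint per band" filter: keep the
# first library datapoint whose wavelength falls in each AVIRIS band and
# discard the rest (median or mean per band would be drop-in
# alternatives)
def filter_to_aviris(usgs_wavelengths, usgs_values, band_edges):
    # band_edges: 225 wavelengths bounding the 224 AVIRIS bands
    bands = np.digitize(usgs_wavelengths, band_edges) - 1
    filtered = np.full(len(band_edges) - 1, np.nan)  # NaN = empty band
    for band, value in zip(bands, usgs_values):
        if 0 <= band < len(filtered) and np.isnan(filtered[band]):
            filtered[band] = value  # keep only the first datapoint
    return filtered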

The radiance for each wavelength is represented by USGS as a 32-bit IEEE
floating-point number. AVIRIS measures these using 16-bit integers, so
we will have to map a conversion over the library to prepare it for
input to the trainer.
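
The conversion itself should be a one-liner; a sketch, assuming values
normalized to [0, 1):

import numpy as np

# hypothetical sketch: map 32-bit IEEE floating-point library values
# onto the signed 16-bit integer range used by AVIRIS, assuming inputs
# in [0, 1)
def to_int16(values, scale=32767):
    return np.clip(np.asarray(values, dtype=np.float32) * scale,
                   -32768, 32767).astype(np.int16)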

Although it would make sense to see what performance we get training and
running the net on full-sized pixels, we want to throw away as much
useless data as we can in order to save precious processor cycles. Can
we throw away some of the layers, for example those outside the visible
light spectrum? To the NN it shouldn't really matter, but the more data
we have, the better its judgments can be. Can we scale down our numbers
from, say, 16 to 8 or fewer bits? The trailing digits of the ratios we
are looking at may not be all that influential, but we don't know that yet.

We would like further information on how to "Preprocess" our raw AVIRIS
data. A pixelwise, layerwise spectral scaling routine wouldn't be too
hard to implement or too costly to execute, but we need to figure out
the parameters. After all of this legwork, the classification should be
a matter of running the net and waiting for it to finish. We do want to
be mindful of the output format since if we throw away too much data the
next mining ID stage won't have enough to work on. Do we want output
values to be yes/no answers or do we want probabilities, which we get
for free?

As far as mining ID goes, answering a few preliminary questions will
influence the design of the second NN. Namely, what features in general
characterize a mine? I assume in my sketch that we are interested now in
groups of pixels. How wide should these groups be? That is, what are the
physical dimensions of surface weathering gradients for example, and how
does this compare to the resolution of our images? We will want to
classify neighborhoods of pixels that are big enough to capture
interesting features, but small enough not to explode in complexity.

What do we do if we have "bad" data? If a pixel comes in with snow
cover, or unrecoverable noise, or whatever, are we safe throwing it
away? I have a certain amount of faith that the net could be trained to
ignore these, but we will still want a representation for holes in our
data, like NULL.
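
One cheap representation would be a sentinel class, something like:

# hypothetical sketch: reserve a sentinel class index so that snowy,
# noisy, or otherwise unrecoverable pixels leave an explicit hole in the
# output rather than a bogus classification
NO_DATA = -1

def classify_or_null(hyperpixel, classifier, is_bad):
    return NO_DATA if is_bad(hyperpixel) else classifier(hyperpixel)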

This covers the main points we discussed in our meeting. Heidi was right
that talking it over provided some unique insights and questions. All of
us agreed to be proactive and chime in about what we're working on,
whether that be data formats or neural nets or web development. We plan
to meet again next Friday, although we haven't reserved a space yet.


Have a Happy Martin Luther King Jr. Day weekend,

Taylor

Lewis John Mcgibbney

Jan 26, 2017, 12:21:07 PM
to Taylor Alexander Brown, Coal-capstone
Loads here.
Let's also try and discuss tomorrow if possible.
Lewis

--
Lewis
Dr. Lewis J. McGibbney Ph.D, B.Sc
Director, MCMA Associates
Skype: lewis.john.mcgibbney

browtayl

Jan 28, 2017, 2:12:54 AM
to COAL - Coal and Open-pit surface mining impacts on American Lands, brow...@oregonstate.edu
Let us know if we can provide any further detail about neural nets that we didn't get to during our meeting.

I'm thinking we might want some background on neural networks in the wiki, in which case I could revise some of my emails to post there.


Taylor

Lewis John Mcgibbney

Jan 28, 2017, 3:25:05 PM
to browtayl, COAL - Coal and Open-pit surface mining impacts on American Lands
Yes, documentation on this is always welcome.
Can you all please read through the following document:
https://drive.google.com/open?id=0B1hXUEnU66LkS2pGaUVZaDNSNXM
As far as I know, this work represents the state of the art in classification for hyperspectral data. Let's keep the discussion going once you've read it.
Lewis


Taylor Alexander Brown

Feb 2, 2017, 3:42:41 PM
to Lewis John Mcgibbney, COAL - Coal and Open-pit surface mining impacts on American Lands
A lot of familiar (and unfamiliar) concepts in that slideshow, namely
feature reduction, classification, and retraining. I notice they are
interested in "nearest neighbor" classification strategies. Are neural
networks still our go-to classifier? In a sense it doesn't matter that
much, since we can unplug one type of classifier and plug in another,
but the devil may be in the details.

I think the biggest question is which strategy is best suited to our
spectral libraries. We have a relatively small number of high-quality
samples per class. So far I have assumed this would be enough to train a
perceptron classifier, but others might have different characteristics.

I have been a bit wary of the active learning approach described,
because it seems like errors could accumulate if we use the classifier
to generate classified data which we then feed back to retrain the
classifier. Put another way, if we identify a pixel as being granite
with 80% confidence, and then we combine that with spectral library
samples that have 100% confidence, we are increasing our sample size but
decreasing our accuracy. On the other hand, the slides show some good
results, so maybe the effect is limited. It is worth noting that we can
always "retrain" our classifiers using secondary data after the fact
without having to do active learning on every pixel, which I suspect
would have severe performance implications.

Thoughts?


Taylor