
Lisp and statistics for 21. century: some articles, books and software


Emre Sevinc

Sep 6, 2009, 3:37:11 AM
As I was browsing the blog of Incanter [1], a statistical computing
and graphics environment for the JVM, I came across several Lisp-related
statistics articles and software packages:

* Back to the Future: Lisp as a Base for a Statistical Computing System, by Ihaka and Lang (2008), from COMPSTAT: Proceedings in Computational Statistics, 18th Symposium [2]
* Journal of Statistical Software, Vol. 13, Special Volume: Lisp-Stat, Past, Present and Future, which includes these articles [3]:
  o Lost Opportunities: Why We Need a Variety of Statistical Languages
  o BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data
  o Visualizing Experimental Designs for Balanced ANOVA Models using Lisp-Stat
  o Interactive Geographical Information System using Lisp-Stat: Prototypes and Applications
  o A Video Tour through ViSta 6.4
  o Some Notes on the Past and Future of Lisp-Stat
  o The Health of Lisp-Stat

And a book:

Visual Statistics – Seeing Data with Dynamic Interactive Graphics,
by Forrest W. Young, Pedro M. Valero-Mora, and Michael Friendly. Wiley
Series in Probability and Statistics (2006) [4]

which links to the ViSta 7 software [5].

It seems that R is still king in academia for statistical programming,
but Lisp was once very strong too, and now at least some researchers
who need more advanced or experimental computations are trying Lisp
again. I wonder whether Common Lisp will find any opportunities in
statistical software development over the next decade.

1- http://incanter.wordpress.com/about/
2- http://books.google.com/books?id=8Cf16JkKz30C&pg=PA21#v=onepage&q=&f=false
3- http://www.jstatsoft.org/v13
4- http://www.uv.es/visualstats/Book/
5- http://www.uv.es/visualstats/Book/DownloadBook.htm

Leo

Sep 6, 2009, 4:33:27 AM
On 2009-09-06 08:37 +0100, Emre Sevinc wrote:
> It seems like R is still the king in the academia for stats
> programming but once Lisp was very strong too and now at least some
> researchers who need more advanced and experimental calculations are
> trying Lisp again. I wonder if Common Lisp will have any opportunities
> in the field of statistical software development for the next decade.

Check out common lisp stat:
http://github.com/blindglobe/common-lisp-stat/tree/master

--
Emacs uptime: 1 day, 23 hours, 44 minutes, 24 seconds

Mirko

Sep 6, 2009, 8:31:09 AM

Hi Leo,

Can you add the PNG files to the Doc/Talks directory so that the TeX
files can compile?

Thanks,

Mirko

AJ Rossini

Sep 6, 2009, 3:02:28 PM
On Sep 6, 2:31 pm, Mirko <mirko.vuko...@gmail.com> wrote:
> On Sep 6, 4:33 am, Leo <sdl....@gmail.com> wrote:
> > Check out common lisp stat:
> > http://github.com/blindglobe/common-lisp-stat/tree/master
>
So here are my current plans for CLS (Common Lisp Statistics, or
Common LispStat):

#1 - Finish the improved data frames, with missing values represented
via gensyms, and generalize them so that observations satisfy
statistical independence (not possible in R, but I'm enforcing it for
CLS). Also need the usual data-manipulation closure tools, plus
management of missing/censored/coarsened data and measurement error.
#2 - Numerics: linear algebra is there, but needs a better front end;
random number generation isn't yet portable across Common Lisp
implementations; optimization approaches need to be incorporated.
Still need to wrap up the ODE and PDE code for modeling central
tendencies in longitudinal, time-series, and network-based regression
(there are issues with simple linear regression, but it's basically
being verified against R).
#3 - Since my day job is heading a group of "pharmaceutical quants",
everything needs verification for use in a regulatory computing
environment (FDA, EMEA, etc.). Lots of unit testing to do.
#4 - Since one of my former research areas was visualization, I have
to extend the current 2-D code to provide a more general interface to
the CAIRO and QT-PAINT infrastructure.
#5 - Lots of algorithms to write, in particular general infrastructure
for randomization/resampling tools: bootstrap, cross-validation,
jackknife, general resampling for survival models, etc.
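
To make the missing-values idea in #1 concrete: the sketch below (my illustration, not the actual CLS API) shows how fresh gensyms can mark missing cells. Because each marker is a distinct uninterned symbol, two missing values never compare EQ, so they can't be silently treated as equal data. The function names here are hypothetical.

```lisp
;; Hypothetical sketch of gensym-based missing values, not CLS code.
(defun make-missing ()
  "Return a fresh, unforgeable marker for one missing observation."
  (gensym "MISSING-"))

(defun missingp (x)
  "True when X is a missing-value marker.
Simplification: treats any uninterned symbol as missing."
  (and (symbolp x)
       (null (symbol-package x))))

(defun mean-ignoring-missing (column)
  "Mean of the non-missing entries of COLUMN, a list; NIL if all missing."
  (let ((xs (remove-if #'missingp column)))
    (when xs
      (/ (reduce #'+ xs) (length xs)))))

;; Example: the two missing cells are distinct objects.
(mean-ignoring-missing (list 1.0 (make-missing) 3.0 (make-missing)))
;; => 2.0
```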

Anyway, I'm hoping to finish before I retire, and I'm happy to work
with anyone on this. It helps me survive being a pointy-headed boss...
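
And for the resampling infrastructure in #5, a toy nonparametric bootstrap in portable Common Lisp might look like the following; again an illustration of the idea, not code from CLS.

```lisp
;; Toy bootstrap sketch (hypothetical names, not the CLS interface).
(defun resample (data)
  "Draw (length DATA) observations from DATA with replacement."
  (let ((n (length data)))
    (loop repeat n collect (elt data (random n)))))

(defun bootstrap (statistic data &key (replicates 1000))
  "List of REPLICATES values of STATISTIC applied to resamples of DATA."
  (loop repeat replicates
        collect (funcall statistic (resample data))))

(defun sample-mean (xs)
  (/ (reduce #'+ xs) (length xs)))

;; Usage: bootstrap distribution of the sample mean,
;; e.g. for a standard-error estimate:
;; (bootstrap #'sample-mean '(2 4 4 4 5 5 7 9) :replicates 200)
```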

Jeff Shrager

Sep 9, 2009, 4:01:25 PM

AJ Rossini

Sep 10, 2009, 1:21:02 AM
On Sep 9, 10:01 pm, Jeff Shrager <jshra...@gmail.com> wrote:

> BTW, R was initially built on Scheme.

Not quite true -- the early R interpreter was based on a Scheme
interpreter, but I don't believe any Scheme code was ever actually used.
