New package "fulltext" ~ We want your feedback!

Scott Chamberlain

Aug 8, 2014, 12:39:12 PM
to ropensci...@googlegroups.com
Hello!

An area we hope to simplify is acquiring text data in R, specifically text from scholarly journal articles. We call this R package `fulltext`. The goal of `fulltext` is to provide a single interface for searching for and retrieving full-text data from scholarly journal articles. Rather than learning a different interface for each data source, you can learn one interface, making your work easier and faster. `fulltext` will only get you data (but hopefully do that very well), make it easy to browse that data, and let you use it downstream for manipulation, analysis, and visualization.
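
To give a flavor of what we're imagining, here's a rough sketch (nothing is implemented yet, so the function and argument names below are purely illustrative):

# purely illustrative sketch - none of this exists yet
library(fulltext)

res <- ft_search(query = "ecology", from = c("plos", "bmc"))  # one search across many sources
out <- ft_get(res)                                            # fetch full text for the hits
ft_extract(out)                                               # plain text, ready for downstream packages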

We currently have R packages for a number of sources of scholarly article text, including the Public Library of Science (PLOS), BioMed Central (BMC), and eLife - all of which could be included in `fulltext`. We can add more sources as they become available.

Instead of us rOpenSci core members planning out the whole package, we'd love to get the community involved at the beginning.
  • What use cases should we include in this package?
  • What data sources should/can be included?
  • What are packages that you'd use after getting data with `fulltext`? We can make it easy to use data from `fulltext` in your favorite packages for analysis/visualization.
  • Any other thoughts are welcome.
Respond here in the mailing list.  We can elevate items to the issue tracker for the package on GitHub as needed.

Thanks! Scott

François Michonneau

Aug 8, 2014, 4:49:36 PM
to ropensci...@googlegroups.com

Hi all,

  One thing that a package like this could facilitate is identifying parts of the tree of life that are studied but not resolved. In other words, are there species/genera/families that are regularly included in ecological/physiological studies but that are not represented in GenBank/TreeBASE?

  The challenge here would be to identify the species names from the full text of the articles, but I can imagine that by querying taxonomic databases with taxize it might not be impossible...
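
  As a rough sketch of the name-finding step (untested; scrapenames in taxize wraps the Global Names Recognition and Discovery service, I'm assuming article_text already holds an article's full text, and the result field names are from memory):

library(taxize)
found <- scrapenames(text = article_text)   # find candidate scientific names in the text
unique(found$data$scientificname)           # the distinct names recognized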

  Cheers,
  -- François




Noam Ross

Aug 8, 2014, 5:16:17 PM
to ropensci...@googlegroups.com
Hey Scott,

Cool.  A few quick points: 

 - I might use this package to look at trends in use of scientific terms, so I'd likely send results from `fulltext` to `tm` for textual analysis, though I haven't fully explored other text-mining packages yet.
 - I think you already know about the CrossRef data mining API (http://tdmsupport.crossref.org/). I'd suggest that for including journals from many publishers.
 - I hope the package does support subscription sources, but has some elegant way to report when it fails to retrieve text because it hits a paywall.

 - Noam

Karthik Ram

Aug 8, 2014, 5:18:24 PM
to ropensci...@googlegroups.com
> I hope the package does support subscription sources, but has some elegant way to report when it fails to retrieve text because it hits a paywall.

This would be tricky but we can try. Perhaps add some optional services that require (paid) API keys or only work in institutions that have subscriptions. I don't think we can easily subvert paywalls.



David Winter

Aug 8, 2014, 5:23:57 PM
to ropensci...@googlegroups.com
Hi all,

One obvious data source not mentioned above is PubMed Central
(http://www.ncbi.nlm.nih.gov/pmc/), which contains green OA versions
of millions of papers thanks to the NIH mandate.

It's integrated with NCBI's Entrez API, so the functions in
rentrez let users query/fetch and discover links between PMC articles
and other databases. But since I don't do any data mining and I'm not
sure how people would use it, I've never attempted to write a parser
for the PMC XML or spent much time thinking about how to make the most
of the repository and its metadata.
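
For anyone unfamiliar, the basic search/fetch pattern looks something
like this (writing from memory, so treat it as a sketch):

library(rentrez)
res <- entrez_search(db = "pmc", term = "atrazine AND open access[filter]")  # search PMC
xml <- entrez_fetch(db = "pmc", id = res$ids[1], rettype = "xml")            # XML for one record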

If would-be users have ideas about how they could use PMC I'd be happy
to help to incorporate them, either into this package or rentrez.

David



--
David Winter
Postdoctoral Research Associate
Center for Evolutionary Medicine and Informatics
The Biodesign Institute
Arizona State University

ph: +1 480 519 5113
w: www.david-winter.info
lab: http://cartwrig.ht/lab/
blog: sciblogs.co.nz/the-atavism

Carl Boettiger

Aug 8, 2014, 5:24:02 PM
to ropensci...@googlegroups.com
Noam: are you referring to accessing a subscription publisher via a paid
API, or by scraping the full text from the publisher's website (e.g.
over a university connection)? Note that I believe the fulltext
package as described only accesses publishers' data through
dedicated API portals for this purpose, not web scraping (which, as you
know, is more complex legally and less robust technically).
--
Carl Boettiger
UC Santa Cruz
http://carlboettiger.info/

Noam Ross

Aug 8, 2014, 6:13:57 PM
to ropensci...@googlegroups.com
I didn't mean that anyone should thwart paywalls.  I was referring to accessing subscription content via publisher APIs.  ScienceDirect, for instance, has a full-text API that requires a key, which should be available to anyone at an institution with ScienceDirect access at some point this year.  Hopefully CrossRef TDM will facilitate this so that you don't need to write interfaces to each publisher API.

I was thinking of a case where you have a bunch of DOIs generated by a search and then use `fulltext` to pull the full-text articles.  Some will be open access, some will be accessible from your institution, some will be closed content, and some will fail for technical reasons.  It's useful to distinguish between those cases in what `fulltext` returns.
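
Something like a status field in the returned object is what I have in mind. A toy sketch of the return shape (not any real API, and the DOIs are dummies):

# hypothetical return shape: one row per requested DOI
res <- data.frame(
  doi    = c("10.xxxx/aaaa", "10.xxxx/bbbb"),   # dummy DOIs
  status = c("ok", "paywall"),                  # or "not_found", "error"
  text   = c("<full text here>", NA),
  stringsAsFactors = FALSE
)
subset(res, status == "ok")   # work only with what actually came back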

Scott Chamberlain

Aug 8, 2014, 6:44:44 PM
to ropensci...@googlegroups.com
Hi François, 

Good idea!  We'll put that use case in the issue tracker https://github.com/ropensci/fulltext/issues/3

Best, Scott

Scott Chamberlain

Aug 8, 2014, 6:55:14 PM
to ropensci...@googlegroups.com
Hi David, 

Thanks for pointing out this oversight :)  Definitely PubMed Central.  Yeah, if it doesn't make sense to add functionality in rentrez for parsing the XML, we can just do that in this new package. I think raw XML should be fine as an input to `fulltext`.

Best, Scott

Carl Boettiger

Aug 12, 2014, 12:38:06 PM
to ropensci...@googlegroups.com
Hi ropensci-discuss,

Not sure if this would be in scope as a use case, but I'd be curious to try to identify papers that have published code (and perhaps, separately, those that have released that code as software intended for reuse, e.g. an R package instead of a script), by journal, discipline, etc.  It might be nice to pull out citation data (from the CrossRef API or other sources) as well -- I'd love to see more examples asking whether any of these things (providing code at all, providing software, maybe broken down by discipline or software license if possible) is associated with increased citation rates (or other metrics).

For starters, I'd be curious to see how MEE's application notes compare to the journal overall (whether or not that's really in scope for fulltext). There's a growing literature on the effect of data publication, but much less on code publication (http://rr.epfl.ch/17/1/VandewalleKV09.pdf).   

Cheers,

Carl



Scott Chamberlain

Aug 12, 2014, 1:41:37 PM
to ropensci...@googlegroups.com
Good idea Carl. 

I think your idea is in scope for sure.  I wonder if it will be easy enough to get supplementary info from journals, since the code may well live in the supplementary materials for this use case.

Pull out citation data: yeah, should be straightforward with crossref. 

Best, Scott

Karthik Ram

Aug 12, 2014, 2:05:41 PM
to ropensci...@googlegroups.com
I think that's an excellent use case, Carl. 
We could even develop a set of heuristics that might identify papers (with some probability) that have these attributes. Perhaps we could pick out a sample of PLOS papers and try these out. I'd be willing to give this a shot.
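
For example, a crude first pass might just grep the full text for code-ish markers (the patterns below are made up and would need real tuning):

# crude heuristic: does an article's full text mention code availability?
code_markers <- c("github\\.com", "sourceforge", "R package",
                  "supplementary code", "scripts? (are|is) available")
has_code <- function(txt) {
  any(vapply(code_markers, grepl, logical(1), x = txt, ignore.case = TRUE))
}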



Saul Wiggin

Aug 21, 2014, 4:37:27 AM
to ropensci...@googlegroups.com
Hi ropensci-discuss,

I was wondering what the issue is with the Mendeley API package that has been mothballed. Is that down to Mendeley or to the implementation in rOpenSci? I've used Mendeley for research and it is excellent for getting around paywall barriers. Also, are you using natural language processing for analysing the journal articles, or are you just using data repos?

thanks,
Saul Wiggin

Saul Wiggin

Aug 21, 2014, 4:48:51 AM
to ropensci...@googlegroups.com
Hi ropensci-discuss,

I was wondering if I could suggest an R package for The Plant List (http://www.theplantlist.org/1.1/about/). It's a working list of plants with a large collection of mosses. It looks like it allows downloads, so it should be possible to create an R package from that.

Regards,
Saul Wiggin

Scott Chamberlain

Aug 21, 2014, 9:48:10 AM
to ropensci...@googlegroups.com
Hi Saul,

I'll let someone else answer your question about Mendeley.

For fulltext, we haven't really started the package yet, so we'd love
to hear use cases. What do you think should be included?

Thanks, Scott

Scott Chamberlain

Aug 21, 2014, 9:52:36 AM
to ropensci...@googlegroups.com
Hi again Saul, 

Great idea about The Plant List.  As luck would have it, there are already two R packages that interact with TPL, one on CRAN (http://cran.r-project.org/web/packages/Taxonstand/index.html) and another only on GitHub (https://github.com/gustavobio/tpl). taxize used to wrap functions from Taxonstand, but now we suggest users just use those functions directly. taxize does have functionality to download bulk data from The Plant List, though.

Cheers, Scott



Karthik Ram

Aug 21, 2014, 7:42:53 PM
to ropensci...@googlegroups.com
Hi Saul,
We had a working version of the RMendeley package. I worked on the second version, which included OAuth, allowing access to the remaining API methods. The whole thing was implemented using RCurl and ROAuth. While we were waiting for ROAuth to go on CRAN, Mendeley changed their API, rendering the previous work useless.

So that's why we stopped development. Developers at Mendeley have now chosen to work on the replacement R package. There is no ETA at the moment, but I will update the list as we know more.

> Also are you using natural language processing for analyzing the journal articles or are you just using data repo's?

We're not building any NLP stuff at the moment (so that's wide open if you want to jump in). As with other packages in R, we fill a gap but hope that R's rich ecosystem of tools can pick up at the next step. For example, have you seen http://cran.r-project.org/web/views/NaturalLanguageProcessing.html?

Cheers,
Karthik




Andrew Defries

Dec 18, 2014, 5:37:10 AM
to ropensci...@googlegroups.com
Hello, 

I would like to do large-scale pesticide research using your package. I have lists of pesticides that I would like to use to accumulate information from open-access journals. Since each pesticide is known by multiple names, I have other tables to look up synonyms, commercial names, etc.

What workflow do you suggest for using your package to search for a term such as "atrazine" occurring in your suite of open-access journals? I would like the paper as PDF, the full text to send to tm or downstream NLP software, and a list of occurrences as DOIs, so that I can list results in a table with the DOI.

A single list can have ~2,000 pesticide compounds. When we consider the several synonyms each may have (standardized IUPAC chemical nomenclature, commercial names, short names, etc.), this inflates the search space to 8-20 searches per compound.
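
To illustrate the synonym expansion I have in mind (toy table; the real lookups come from my chemical name tables, and the names below are just examples):

syns <- data.frame(
  compound = rep("atrazine", 3),
  name     = c("atrazine", "Gesaprim",
               "6-chloro-N-ethyl-N'-(propan-2-yl)-1,3,5-triazine-2,4-diamine"),
  stringsAsFactors = FALSE
)
# one OR-joined query per compound keeps the number of searches down
queries <- tapply(syns$name, syns$compound,
                  function(x) paste0('"', x, '"', collapse = " OR "))
queries[["atrazine"]]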

Looking forward to a dialogue,

Andrew Defries PhD


Scott Chamberlain

Dec 18, 2014, 11:36:54 AM
to ropensci...@googlegroups.com
Hi Andrew, 

Thanks for your message! This is a great use case. I have a few questions to clarify what you want. 

* You said you want PDFs. Is that to archive them for human reading, or do you want to extract text from the PDFs?  What I'm getting at is that if XML is available, it might be better for text mining, since it's structured data that can be queried more precisely, and file sizes for XML will be much smaller than for PDFs.  Of course XML won't always be available, but you could choose it when it is.
* You mentioned downstream software for text mining. Do you have anything else in mind besides the tm package?
* Getting the list of DOIs is no problem. 

The workflow would probably be to first search a combination of all the search tools: Crossref, Entrez, BMC, PLOS, arXiv. Then take those results and fetch the full text that is available. Then pass the full text down to tm or other text-mining packages.  I'll work on getting some example code and respond here to get you started.
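
In pseudocode, something like this (the package API is still settling, so treat these function names as placeholders):

res  <- ft_search(query = "atrazine", from = c("crossref", "entrez", "bmc", "plos", "arxiv"))
dois <- res$doi                                 # the table of matching DOIs you asked about
full <- ft_get(res)                             # fetch full text (XML where available)
library(tm)
corp <- Corpus(VectorSource(ft_extract(full)))  # hand off to tm or other text-mining packages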

P.S. Keep in mind that this package is in early development, so expect changes.

Cheers, Scott

Chris Stubben

Dec 18, 2014, 2:49:01 PM
to ropensci...@googlegroups.com
At least for PMC Open Access, if you want to search for 2,000 different pesticides, you should probably download all 1 million XML files in the four article* files at ftp://ftp.ncbi.nlm.nih.gov/pub/pmc and search those directly.  You can load them in R using xmlParse(file). I recently looped over all 1 million files to count tables (1.46 million) and supplements (300K) by journal, and that only took a few hours, so you could modify that for pesticide searches and save the matching text.  Depending on what you need, the XPath queries can be difficult, and I constantly find new problems, like some XML with tables nested in paragraph tags.
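
A sketch of that kind of loop, assuming the .nxml files are already unpacked into a local directory (pmc_oa here):

library(XML)
files <- list.files("pmc_oa", pattern = "\\.nxml$", recursive = TRUE, full.names = TRUE)
hits <- vapply(files, function(f) {
  doc <- xmlParse(f)
  txt <- xmlValue(xmlRoot(doc))   # whole document as one text blob
  free(doc)                       # release the parsed tree inside the loop
  grepl("atrazine", txt, ignore.case = TRUE)
}, logical(1))
sum(hits)                         # number of files mentioning the term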

Also, if you want to return full text from specific PMC search results, you could use the pmcXML package and related functions that help with XML parsing (see https://github.com/cstubben/pmcXML; like fulltext, this is in development and will change, hopefully based on suggestions from users). I apologize for the long email below, but here are the basic steps.


1. Run a query in PMC and get ids

atrazine AND open access[FILTER]   #911 records
atrazine[Body - All Words]  AND open access[FILTER] #610
# or 56 with atrazine in title

atz <- ncbiPMC("atrazine[TITLE] AND open access[FILTER]")
names(atz)
[1] "pmc"      "authors"  "year"     "title"    "journal"  "volume"   "pages"    "pubdate"  "epubdate" "pmid"     "doi"    

subset(atz, journal=="BMC Genomics")

        pmc                                     authors year
46 PMC2242805 Ramel F, Sulmon C, Cabello-Hurtado F, et al 2007
                                                                                                                                                                   title
46 Genome-wide interacting effects of sucrose and herbicide-mediated stress in Arabidopsis thaliana: novel insights into atrazine toxicity and sucrose-induced tolerance
        journal volume pages pubdate   epubdate     pmid                     doi
46 BMC Genomics      8   450         2007/12/05 18053238 10.1186/1471-2164-8-450


Note: ncbiPMC uses old functions in the BioC genomes package for the E-utility scripts, which I need to replace with rentrez scripts so I can drop that package dependency.  I will keep the parser that creates the summary table above in the package.


2.  Download XML

There are two options for automated downloads.  pmcOAI uses the PMC OAI service; it removes the namespace for easier XPath queries and adds carets (^) within superscript tags and hyperlinked table footnotes for display as plain text.

 doc <- pmcOAI("PMC2242805")

## since this returns an XMLInternalDocument, all the related functions in the XML package should work.
 summary(doc)
 getNodeSet(doc, "//abstract")


The other option is to use the PMC FTP site, but this currently requires loading a local copy of the million-row file_list.txt file to get the directory name (any suggestions on how to avoid this would be welcome).

 pmcfiles <- read.delim( "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.txt" , skip=1, header=FALSE, stringsAsFactors=FALSE)
 nrow(pmcfiles)
[1] 941083

names(pmcfiles)<-c("dir", "citation", "pmcid")
subset(pmcfiles, pmcid == "PMC2242805")
                                              dir                        citation      pmcid
122831 a0/ff/BMC_Genomics_2007_Dec_5_8_450.tar.gz BMC Genomics. 2007 Dec 5; 8:450 PMC2242805


pmcFTP( "PMC2242805")

trying URL 'ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/a0/ff/BMC_Genomics_2007_Dec_5_8_450.tar.gz'
...
Saved to ./PMC2242805

# the ftp site has xml and PDF copies of the paper that you mentioned in your email, plus figures incl thumbnails and supplements.

list.files("PMC2242805")
 [1] "1471-2164-8-450-1.gif"   "1471-2164-8-450-1.jpg"   "1471-2164-8-450-2.gif"   "1471-2164-8-450-2.jpg"   "1471-2164-8-450-3.gif"   "1471-2164-8-450-3.jpg" 
 [7] "1471-2164-8-450-4.gif"   "1471-2164-8-450-4.jpg"   "1471-2164-8-450-5.gif"   "1471-2164-8-450-5.jpg"   "1471-2164-8-450.nxml"    "1471-2164-8-450.pdf"   
[13] "1471-2164-8-450-S10.pdf" "1471-2164-8-450-S11.pdf" "1471-2164-8-450-S12.pdf" "1471-2164-8-450-S1.pdf"  "1471-2164-8-450-S2.pdf"  "1471-2164-8-450-S3.pdf"
[19] "1471-2164-8-450-S4.pdf"  "1471-2164-8-450-S5.pdf"  "1471-2164-8-450-S6.xls"  "1471-2164-8-450-S7.pdf"  "1471-2164-8-450-S8.pdf"  "1471-2164-8-450-S9.pdf"
[25] "license.txt"           

# this xml does not have a namespace like OAI
doc2 <- xmlParse("PMC2242805/1471-2164-8-450.nxml")


3.  Parse XML  ( see the wiki page at https://github.com/cstubben/pmcXML/wiki/Parse-xml for more details )

The package currently includes functions to parse the metadata, full text, tables, and references, and optionally to load supplements (all of this was written to index in Apache Solr, so I have not thought too much about text-mining applications).

# pmcMeta creates a list of metadata fields and also gets MeSH terms from PubMed

meta <- pmcMeta(doc)

names(meta)
 [1] "id"              "title"           "author_display"  "year"            "journal"         "volume"          "pages"           "journal_display"
 [9] "citation"        "doc_type"        "doc_source"      "epubdate"        "pubdate"         "first_author"    "publisher"       "pmid"          
[17] "pmcid"           "doi"             "URL"             "author"          "affiliation"     "keywords"        "mesh"            "license"     


# pmcText splits the document into a list of subsections (with full path to subsection title) and each subsection is either a vector of paragraphs or sentences

txt <- pmcText(doc, sentence=FALSE)  # subsections as vectors of paragraphs
txt <- pmcText(doc)                  # default: subsections as vectors of sentences

 sapply(txt, length)
                                                                                                                 Main title
                                                                                                                          1
                                                                                                                   Abstract
                                                                                                                          8
                                                                                                                 Background
                                                                                                                         25
                                                          Results; Physiological effects of atrazine and sucrose treatments
                                                                                                                         12
                                                         Results; Effects of atrazine and sucrose on global gene expression
                                                                                                                         17
                                                        Results; Identification of protection-related functional categories
                                                                                                                         27
Results; Characterization of atrazine xenobiotic and oxidative effects: evidence for deleterious effects on gene regulation
                                                                                                                         34
     ...

I added another function to simplify searches using grep; this finds 154 atrazine mentions.  The important question to consider is whether the structure of the document really matters - do you care if atrazine is in the abstract, section title, caption, a specific section below, and so on, or do you just want a giant text blob to pass to tm or another package?


x <- searchPMC(txt, "atrazine")

head(x)
data.frame(table(x$section))
                                                                                                                          Var1 Freq
1                                                                                                                     Abstract    5
2                                                                                                                   Background    8
3                                                                                                                   Conclusion    1
4                                                                                                                   Discussion   28
5                                                                                                               Figure caption    8
6                                                                                                                   Main title    1
7                                                                   Methods; Microarray data validation and qRT-PCR experiment    1
8                                                                                Methods; Plant material and growth conditions    2
9                                                                               Methods; RNA isolation and microarray analysis    1
10 Results; Characterization of atrazine xenobiotic and oxidative effects: evidence for deleterious effects on gene regulation   20
11                Results; Differential expression of specific transcription factors during sucrose-induced atrazine tolerance    8
12                                                          Results; Effects of atrazine and sucrose on global gene expression    7
13                                                         Results; Identification of protection-related functional categories   17
14                                                           Results; Physiological effects of atrazine and sucrose treatments    4
15                  Results; Specific effects of combined sucrose plus atrazine treatment on tolerance-related gene regulation   12
16                     Results; Time-course of induction of transcription factors during sucrose-dependent atrazine protection    6
17                                                                                                               Section title    6
18                                                                                                          Supplement caption   14
19                                                                                                               Table caption    5



At least for the tm package, you can convert this list into a Corpus (but again I have little experience with traditional text mining):

library(tm)
corp <- Corpus(VectorSource(unlist(txt)))  # unlist first: txt is a list of character vectors
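
From there the usual tm machinery applies, for example (a quick sketch; control options per the tm docs):

dtm <- DocumentTermMatrix(corp, control = list(tolower = TRUE, removePunctuation = TRUE))
findFreqTerms(dtm, lowfreq = 10)   # terms occurring at least 10 times overall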


# pmcTable creates a list of data.frames. This function uses rowspan and colspan attributes within the th and td tags to correctly format and repeat cell values as needed; for example, table 1 has a three-row header in columns 3-5, which is collapsed into a single name.

x <- pmcTable(doc)
Parsing Table 1 Induction by atrazine of genes involved in xenobiotic and oxidative stress response
Parsing Table 2 Repression by atrazine of genes involved in xenobiotic and oxidative stress response
Parsing Table 3 Selected atrazine-regulated genes that may be involved in atrazine injury
Parsing Table 4 Genes potentially involved in sucrose-induced atrazine tolerance
Parsing Table 5 Transcription factors potentially involved in sucrose-induced atrazine tolerance


lapply(x, head)
$`Table 1`
  Accession number                                                  Gene description log2(ratio): Treatment comparison: MA/M
1        At1g06570                        4-hydroxyphenylpyruvate dioxygenase (PDS1)                                    3.19
2        At1g33110                                        MATE efflux family protein                                    2.17
3        At1g53580 Hydroxyacylglutathione hydrolase, putative/glyoxalase II putative                                    1.75
4        At1g70610                                            ABC transporter (TAP1)                                    2.20
5        At1g80160                                       Glyoxalase I family protein                                    2.03
  log2(ratio): Treatment comparison: S/M log2(ratio): Treatment comparison: SA/M
1                                  -0.76                                    2.12
2                                    nde                                    1.88
3                                    nde                                    1.06
4                                    nde                                    1.38
5                                  -1.81                                     nde

# The function also adds a bunch of additional attributes

attributes(x[[1]])

$id
[1] "PMC2242805"

$file
[1] "http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2242805/table/T1"

$label
[1] "Table 1"

$caption
[1] "Induction by atrazine of genes involved in xenobiotic and oxidative stress response"

$footnotes
[1] "nde: not differentially expressed, genes with a Bonferroni P-values higher than 5% were considered as being not differentially expressed as described in Lurin et al. [75]."

I index tables in three different formats in Solr (original text blob, delimited text, and also as collapsed rows, since each one changes relevancy scoring).  Also, the collapse2 function here will detect and repeat subheaders and therefore increase their term frequency (the row id is optional, mainly added for Solr highlighting).

collapse2(x[[1]])
[1] "Row 1 of 5; Accession number=At1g06570; Gene description=4-hydroxyphenylpyruvate dioxygenase (PDS1); log2(ratio): Treatment comparison: MA/M=3.19; log2(ratio): Treatment comparison: S/M=-0.76; log2(ratio): Treatment comparison: SA/M=2.12."                    
[2] "Row 2 of 5; Accession number=At1g33110; Gene description=MATE efflux family protein; log2(ratio): Treatment comparison: MA/M=2.17; log2(ratio): Treatment comparison: S/M=nde; log2(ratio): Treatment comparison: SA/M=1.88."    


## References

y<- pmcRef(doc)


#  Supplements.  This lists the links to supplements mentioned in the full text.

z <- pmcSupp(doc)
names(z)
[1] "label"   "caption" "file"    "type"

z[12,]

If you have Unix tools for some system commands, you can get Excel, Word, HTML, PDF, text, and compressed files.  I need to fix this to optionally read from a local file downloaded with pmcFTP rather than from the link.

 s12 <- pmcSupp(doc, 12)
Downloading Additional file 12
[1] "Returned 35 rows"
 s12
 [1] "Genes selected for qRT-PCR analysis and primer sequences"                   
 [2] "Accession"                                                                  
 [3] "number Gene description Forward sequence Reverse sequence"                  
 [4] "At1g06570 4-hydroxyphenylpyruvate"                                          
 [5] "dioxygenase (PDS1)"                                                         
 [6] "TCGCTCGTCGCTTCTCCTG TGTGGTTGTCGGTTTAATCTCTCC"                               
 [7] "At1g42990 bZIP transcription factor"                                        
 [8] "family protein"                                                             
 [9] "TCTGCTGTGCTCTTGTTGGAATC GAACCCTTACATCTCCGACTAACG"

Good luck,

Chris



Scott Chamberlain

Dec 18, 2014, 4:42:37 PM
to ropensci...@googlegroups.com
Chris, thanks for this and the code examples.

This is useful. I added a comment on an issue in fulltext about possibly importing your package, and I pinged you there.

Scott

Scott Chamberlain

Jan 2, 2015, 6:22:11 PM
to ropensci...@googlegroups.com
Andrew, 

We just started a new forum, which should be a big improvement over Google Groups. If you wouldn't mind reposting your question there (http://discuss.ropensci.org/), that would be great, and we can continue there.

Scott

Andrew Defries

Jan 2, 2015, 8:20:59 PM
to ropensci...@googlegroups.com

Yes sir, I will. Apologies for the delay in responding. I have tried a number of your suggestions.

Will repost.
