As a brief preamble: thanks to johnmyleswhite and everyone else involved in putting this really useful tool together. I discovered it yesterday via the Coursera Data Analysis class, which relies heavily on R. PT looks like it will help save me from myself when I am down in the "analysis trenches" over the coming weeks.
Now the substantive point:
I am trying to load earthquake data from an internet file using PT via a .url input file specification. The data loads fine, but as xxx lines of 1 variable rather than the xxx lines of 10 variables I was hoping for. The reason I say "hoping for": the file is actually a well-formed CSV file that just happens to have a .txt extension. (For instance, if I download it and rename it to .csv, it loads fine via the read.csv function.)
I've looked at the R code on GitHub and can see that any file with a .txt extension is read with the .wsv (whitespace-separated values) reader, i.e. the read.csv function with the parameter sep = ' '.
As a test, running the equivalent read.csv commands in the console (first with sep = ' ', then with sep = ',') gives:

sep = ' ' : same as with PT - xxx lines of 1 variable
sep = ',' : the hoped-for result - xxx lines of 10 variables
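For reference, here is a minimal reproduction of the separator behaviour described above. An inline string stands in for the real earthquake URL (which I have elided), and a 3-column header stands in for the real 10-column one:

```r
# A tiny stand-in for the real CSV-with-.txt-extension file.
csv_text <- "Src,Eqid,Datetime\nci,123,2013-01-01"

# What PT's .wsv reader effectively does for .txt files: sep = ' '.
# The comma-separated line contains no spaces, so it all lands in 1 column.
wide <- read.csv(textConnection(csv_text), header = TRUE, sep = " ")
ncol(wide)   # 1

# The 'hoped for' read: sep = ',' splits the fields properly.
good <- read.csv(textConnection(csv_text), header = TRUE, sep = ",")
ncol(good)   # 3 here; 10 with the real file
```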
My actual question: is it possible to specify a separator for files in this case? Or should I accept the data as it is and separate it out in the 'munging' steps? Another thought strikes me: can the /data subfolder contain a script with the two lines of R code I showed, i.e.

con2 <- url("foourl/~/foofile", "r")
equake <- read.csv(con2, header = TRUE, sep = ",")

Would that have the same effect?
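To sketch what such a script might look like, assuming PT does source .R files placed in the data/ directory (the script name data/earthquakes.R is hypothetical, and a temp file stands in for the real URL so the sketch is self-contained):

```r
# data/earthquakes.R (hypothetical) -- custom loader for the .txt feed.
# In the real script the connection would be url("foourl/...", "r");
# a local temp file is used here so the example runs anywhere.
eq_file <- tempfile(fileext = ".txt")
writeLines("Src,Eqid\nci,123", eq_file)

con2 <- file(eq_file, "r")
earthquakes <- read.csv(con2, header = TRUE, sep = ",")
close(con2)
```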
And a supplementary question: is it possible to 'timestamp' data accessed in this way? The earthquake data here changes from minute to minute, so it would be most useful to be able to say: here is an analysis based on data accessed from foofile@foourl on dd/mm/yy hh:mm:ss.
Any input gratefully received!
PT : 0.4-2
Win 7 on an antique but functional Lenovo tower.