Dear Alvaro,
performing cross-validation is relatively straightforward with the function classify(), without any manual swapping between the two sets. You define your "primary_set" and your "secondary_set", and then you type, e.g.:
classify(cv.folds = 10)
or, if you want to have access to the particular CV folds:
# perform the classification:
results = classify(cv.folds = 10)
# get the classification accuracy:
results$cross.validation.summary
This will give you stratified cross-validation, i.e. the variant that reproduces the representation of classes from your training set in N random iterations.
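In case the idea of stratification is unclear, here is a minimal base-R sketch (an illustration only, with made-up class labels; this is not stylo's actual implementation): within each class, the texts are shuffled and dealt out over the folds, so every fold keeps roughly the same class proportions as the whole set.

```r
# stratified assignment of texts to folds (toy example)
set.seed(42)
labels <- c(rep("Austen", 6), rep("Bronte", 4))  # hypothetical class labels
n.folds <- 2

fold.id <- integer(length(labels))
for (cls in unique(labels)) {
  idx <- sample(which(labels == cls))              # shuffle within the class
  fold.id[idx] <- rep_len(1:n.folds, length(idx))  # deal out over the folds
}

# each fold now mirrors the overall 6:4 class ratio:
table(labels, fold.id)
```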
Now, there is a function crossv() that is meant to replace some core fragments of classify() in the future. I am not there yet, though (as always, for lack-of-time reasons). So far it is a beta-version function with some basic functionality. To perform leave-one-out cross-validation, you prepare the "training_set" only and put your texts there. Then you have to load the corpus and prepare a document-term matrix. Let's assume you've already got one:
library(stylo)
data(galbraith)
Type help(galbraith) to see what the matrix contains. Then you type:
crossv(training.set = galbraith, cv.mode = "leaveoneout", classification.method = "svm")
To build the document-term matrix from scratch, a few more steps have to be taken beforehand:
library(stylo)
# loading the corpus
texts = load.corpus.and.parse(files = "all", corpus.dir = "corpus")
# getting a general frequency list
freq.list = make.frequency.list(texts, head = 1000)
# preparing the document-term matrix:
word.frequencies = make.table.of.frequencies(corpus = texts, features = freq.list)
# now the main procedure takes place:
crossv(training.set = word.frequencies, cv.mode = "leaveoneout", classification.method = "svm")
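Conceptually, the leave-one-out procedure can be sketched in a few lines of base R. The example below uses invented frequencies and a toy 1-nearest-neighbour classifier standing in for the SVM, so it is an illustration of the idea, not crossv()'s actual code: each text in turn is held out, the rest serve as training data, and the held-out text is classified.

```r
# toy leave-one-out loop (illustration only)
freqs <- matrix(c(1, 1,   1.2, 0.9,   5, 4,   4.8, 4.2),
                ncol = 2, byrow = TRUE)   # 4 "texts", 2 "features"
labels <- c("A", "A", "B", "B")

correct <- 0
for (i in seq_along(labels)) {
  train <- freqs[-i, , drop = FALSE]      # all texts except the i-th
  dists <- sqrt(rowSums((train -
             matrix(freqs[i, ], nrow(train), 2, byrow = TRUE))^2))
  guess <- labels[-i][which.min(dists)]   # class of the nearest neighbour
  if (guess == labels[i]) correct <- correct + 1
}
accuracy <- correct / length(labels)
```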
I hope this helps.
All the best,
Maciej