I don't think that 10K rows is going to be a big challenge for the CSV class (or FasterCSV if you're on Ruby prior to 1.9, where it became the standard library's CSV).
With my limited knowledge of what's going on, I'd be inclined to build a hash keyed by student id, with arrays as the values. Each array would contain hashes representing the rest of the values of each line. I'd come to this arrangement with the assumption that the additional values are fairly strongly related.
I'd also plan on converting the date fields (and integers) as you import.
require 'date'

students = {
  1111 => 'busta'
}
grades = {
  1111 => [
    { last_attended: Date.new(2014, 1, 15), grade: 'F', end_date: Date.new(2014, 5, 1) },
    { last_attended: Date.new(2014, 1, 1),  grade: 'F', end_date: Date.new(2014, 5, 1) },
    { last_attended: Date.new(2014, 1, 8),  grade: 'U', end_date: Date.new(2014, 5, 1) }
  ]
}
grades.each do |student_id, grade_list|
  earliest = grade_list.min_by { |grade| grade[:last_attended] }
  ...
end
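Putting it together, a complete run over a small sample (using `Date` objects for the date fields, and `min_by` to pick the earliest attendance) might look like:

```ruby
require 'date'

students = { 1111 => 'busta' }
grades = {
  1111 => [
    { last_attended: Date.new(2014, 1, 15), grade: 'F', end_date: Date.new(2014, 5, 1) },
    { last_attended: Date.new(2014, 1, 1),  grade: 'F', end_date: Date.new(2014, 5, 1) }
  ]
}

grades.each do |student_id, grade_list|
  # min_by compares the Date values directly, no manual <=> needed
  earliest = grade_list.min_by { |grade| grade[:last_attended] }
  puts "#{students[student_id]}: #{earliest[:last_attended]}"
end
# => busta: 2014-01-01
```

`min_by` reads more naturally than `min` with a comparison block, and swapping it for `max_by` would give you the most recent attendance instead.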