"Henry Zaccak" <henry.zaccak(AT)gmail(DOT)com> wrote in message
news:IAC7DG.Gz...@cdf.toronto.edu...
Other courses that I didn't like but were still acceptable to me:
CSC238 and CSC364
If I didn't have Prof Pitt for CSC238 then it would have been in the hated
list.
Henry
CSCXXX, where XXX represents a 3-digit number -- just kidding, by the way.
207 was a *LOT* of work, but it was interesting and I learned a lot. 258
and 209 should have moved faster.
I hated stats247 with a bloody passion. I don't even have the words to
explain my feelings toward math137. I was into phy140 for a while, but
the labs were too much for me to worry about. Ironically I dropped it
and took AST201 (an alleged GPA booster) in its place, which turned out
to be my lowest mark ever before 247.
Mike
g4mike@cdf
I think you mean:
for(int x=0; x<10;x++) {
for(int y=0; y<10;y++) {
for(int z=0; z<10;z++) {
if (CSCxyz !=null) then CSCxyz sucks;
}
}
}
Henry
I agree, except that many students still don't get it after
those hundred times, given what we see in the exam, CSC236
and other courses. The time spent on complexity proofs, and
the preparation via logic and proof, is still a lot more than
students had back when we had CSC238 instead of CSC165+CSC236.
But I/we will keep in mind your feedback.
Gary Baumgartner
>Mike
>g4mike@cdf
Also, I didn't like csc236+165 at first, but I noticed,
after my bad marks were in, I started to understand the
material a lot more. Would be a lot better as a full year
course, but I understand the complications in that too.
ECE385. Was interesting. If only life did not revolve
so much around C it would have been a lot better. CSC209
should be made a prereq.
Sta247: This R generated stem and leaf plot tells the story.
In all fairness the 0's are probably people who dropped the
course.
Final Exam Marks:
The decimal point is one digit to the left of the |
0 | 000000000000007
1 | 13588
2 | 01122336677789
3 | 000011111123333344556666677888888889999
4 | 0000011111111111222223334445555566666777777777888888889
5 | 0000111233334555566677778999
6 | 002222333334455667778999999
7 | 000133455688
8 | 4444556
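For the curious, R's stem() produces that display directly from the raw marks; the grouping itself is trivial. A rough Python sketch of the same idea (the marks below are made up, not the real 247 data):

```python
from collections import defaultdict

def stem_and_leaf(marks):
    """Group integer marks into a stem-and-leaf table:
    stem = tens digit, leaves = ones digits in sorted order."""
    rows = defaultdict(list)
    for m in sorted(marks):
        rows[m // 10].append(m % 10)
    return {stem: "".join(str(d) for d in leaves)
            for stem, leaves in sorted(rows.items())}

# e.g. marks 7, 13, 15, 21 group as {0: '7', 1: '35', 2: '1'}
```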
> for(int x=0; x<10;x++) {
> for(int y=0; y<10;y++) {
> for(int z=0; z<10;z++) {
> if (CSCxyz !=null) then CSCxyz sucks;
> }
> }
> }
Alex
> csc209
> It's a bad combination of UNIX system and C language
Funny, I LOVED CSC209. Our exam didn't have very much shell script on it
(Spring 04).
Oh well.
- DWF
>ECE385. Was interesting. If only life did not revolve
>so much around C it would have been a lot better. CSC209
>should be made a prereq.
There's really no other option than C for the sorts of things we
were doing in 385. However, 209 should be an absolute prereq.
Suggesting 369 as good preparation would probably not be a bad
idea either, as, in its current form, it's very C-centric.
The 258 requirement was much less important. In fact, I don't see
any reason for the 258 requirement for 209, aside from maybe a
little background that _might_ help with pointers.
Steve
>csc209
>It's a bad combination of UNIX system and C language
Is that possible? The UNIX<->C connexion is profoundly strong.
Steve
At least in the past, and still true for systems programming now.
However, I'd argue that these days every language/environment seems to have
a good high level library with it that minimizes the exposure to UNIX's
kernel and stdlib functions, thus removing the programmer from a C-dominant
environment. For example, I can write a python script to receive, parse,
process an rpc request, then encode the result and send it back w/out ever
touching a C function. In fact, I had to for 207, and I never touched a
socket, read, write, or any other function that required knowledge of how
what I was doing was getting done. With perl, python, java, gtk, qt et al.
becoming considered part of a complete UNIX environment...
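A hedged sketch of the kind of C-free RPC round trip described above, using the stdlib xmlrpc modules (the add function is just a placeholder, not anything from the actual 207 assignment):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Port 0 asks the OS for any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client call marshals the request, opens the socket, and does the
# reads and writes -- the script never touches a C-level API directly.
port = server.server_address[1]
proxy = ServerProxy("http://127.0.0.1:%d" % port)
result = proxy.add(2, 3)
server.shutdown()
```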
Then you have the prevalence of GUIs in all modern distros, and a user never
even has to touch a shell script.
Back in the days, when we used to walk ten miles to school in four feet of
snow with bare feet uphill both ways, and that was in the summer, we saw the
C heritage in UNIX.
my $0.02.
Christian.
> However, I'd argue that these days every language/environment seems to have
> a good high level library with it that minimizes the exposure to UNIX's
> kernel and stdlib functions, thus removing the programmer from a C-dominant
> environment.
I find there's a rampant phobia of C, probably brought on by bad
experiences in the past. Really, C isn't that bad. In fact, it can be a
really wonderful language, I find.
> For example, I can write a python script to receive, parse,
> process an rpc request, then encode the result and send it back w/out ever
> touching a C function. In fact, I had to for 207, and I never touched a
> socket, read, write, or any other function that required knowledge of how
> what I was doing was getting done. With perl, python, java, gtk, qt et al.
> becoming considered part of a complete UNIX environment...
perl has been there for years, we can only hope python gets there soon.
Java, I'm not so sure about, since Sun's not showing any signs of caving
in to open sourcing Java. From what my friend at Red Hat tells me we may
be slaves to Sun for only a short while longer; people in their R&D now
have Eclipse running on a 100% GPL Java VM. Natively compiled Java would
also be nice, and on the desktop I'd trade speed of execution for the
extra layer of security incurred by the VM layer.
That said, I don't see Java filling the same niche as C - you haven't got
the speed, flexibility, nor the direct access to low level data structures
that many applications require. Even Subversion, while a relatively young
project (and hardly a system-level pursuit) is mostly written in C, for
the first of those reasons, as far as I gather.
gtk and qt, on the Linux/BSD desktop, maybe. However GTK (the better of
the two libraries IMHO) is still pretty tied to C - it's got bindings for
other things but I've always found them to be kinda wonky.
> Then you have the prevalence of GUIs in all modern distros, and a user never
> even has to touch a shell script.
While I think that Bourne shell scripting is almost the worst thing but
for Tcl, I think it's a detriment to the average UNIX user to not be able
to hack together a quick script for getting something done. The shell is a
thing of beauty. Between piping commands, wildcards, regexps, various
scripting languages, I'm able to get a lot more done a lot more quickly,
even in day to day life, just through some basic knowledge of the shell.
Example: a client of mine sent me a bunch of .vcf Address Book files from
Outlook that he wanted imported into our custom mass mailer. The format
is plain text. I could've sat there and copied/pasted like a chump, but
cat, piped to grep, piped to cut, dumped to a file turned a several hour
job into 30 seconds.
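For the record, the same extraction is a couple of lines of Python too -- roughly cat file | grep EMAIL | cut -d: -f2- (the EMAIL prefix here is an assumption; real vCard fields vary, e.g. EMAIL;TYPE=INTERNET):

```python
def extract_emails(vcf_text):
    """Pull the address out of every EMAIL line of a plain-text vCard.
    Keeps everything after the first ':', like cut -d: -f2-."""
    emails = []
    for line in vcf_text.splitlines():
        if line.startswith("EMAIL"):
            emails.append(line.split(":", 1)[1].strip())
    return emails
```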
Long live the command line.
- dwf
>>>csc209
>>>It's a bad combination of UNIX system and C language
>>
>> Is that possible? The UNIX<->C connexion is profoundly strong.
>At least in the past, and still true for systems programming now.
>However, I'd argue that these days every language/environment seems to have
>a good high level library with it that minimizes the exposure to UNIX's
>kernel and stdlib functions, thus removing the programmer from a C-dominant
>environment. For example, I can write a python script to receive, parse,
>process an rpc request, then encode the result and send it back w/out ever
>touching a C function. In fact, I had to for 207, and I never touched a
>socket, read, write, or any other function that required knowledge of how
>what I was doing was getting done. With perl, python, java, gtk, qt et al.
>becoming considered part of a complete UNIX environment...
I've worked with guys who've been programming for several
years, making tons of money, but don't really understand what
pointers are. They live in a C# or Java bubble, often away from
fancy algorithms and other complexities. Now, that's not
necessarily bad. Dealing with strings and buffers in C can be
both painful, and highly error prone. Naturally, giving up some
performance for much higher programming throughput and lower bug
count is a good thing, but I still feel that it's very important
to know what's really going on. After all, we're being trained
to be scientists.
Problems often arise when their code is tremendously inefficient,
and needs some machine-dependent stub written in C or C++ to get
things moving, or when one looks toward embedded hardware, or this,
or that, or whatever. The thing is, one is really at a severe
disadvantage if they don't know what is going on beneath them in
the programming world, especially if they ever step outside of
their bubble. Regardless of what language a project used, I'd
be extremely hesitant to hire a person who didn't understand C
(or a similar lowish level language), system calls, library functions,
etc. Being able to type in import java.foo.bar.wheee and use an
object without some appreciation of what it does just won't cut it.
209's content seems necessary, at least insofar as it prepares
one for CSC369, or even ECE385. People still do need to know C as
there really isn't a better language yet for writing an OS.
Steve
> In article <IAGByF.I3...@cdf.toronto.edu>, Mike C <g4mike@cdf> wrote:
> >I don't think I've had a CS course so far that I've "hated". I thought
> >165 should have spent more time on complexity and less on the
> >straightforward stuff. I swear we must have had a month of "okay, for the
> >hundredth *****ing time, if a->b it does not mean that b->a"
>
> I agree, except that many students still don't get it after
> those hundred times, given what we see in the exam, CSC236
> and other courses.
I suspect that students are well aware that a->b is different from
b->a. I offer two alternative hypotheses.
If a and b are very long and intricate statements (for example they
contain several quantifiers and further ->'s), after finishing reading
them it is very likely to forget the correct direction of the ->
connective between them, or even that it is -> rather than /\. Human
brains are simply too easy to buffer-overrun, especially during exams.
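(The underlying fact is mechanical, of course -- a throwaway truth table, purely illustrative, confirming that a->b and b->a disagree exactly when a and b differ:)

```python
from itertools import product

# Material implication: p -> q is (not p) or q.
implies = lambda p, q: (not p) or q

# Rows of the truth table where a->b and b->a give different answers.
diffs = [(a, b) for a, b in product([False, True], repeat=2)
         if implies(a, b) != implies(b, a)]
# diffs is [(False, True), (True, False)]
```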
Some students do a wrong thing knowingly because they can't solve the
original problem and they think they can solve a modified problem (or
make a ridiculous assumption) and get part marks. A student who can't
prove a->b thinks naturally that he may still get some marks for
proving b->a; it is also quite natural to refrain from admitting it in
writing and see how far he can get (in terms of fooling the grader for
marks).
I have seen 4th year CS students who used 1<0 to finish their proofs.
Do you honestly think that they believed what they wrote and needed more
teaching? Clearly either they were in such a hurry that they didn't stop
to even look at it, or they knew it was wrong but felt that handing
in falsehood was better than handing in blank.
Yes I know, don't attribute to malice what can be explained by
stupidity. But the students are neither malicious nor stupid, just
desperate.
Don't attribute to stupidity what can be explained by despair.
Albert, I think your statement is definitely an improvement on the
original - there are some cases where neither malice nor stupidity comes
into play. In CS at UofT, these cases are by far the most common :)
A part of the last assignment was implementing a database using B*-Trees in
C++ from scratch. I remember our group handing it in way after the classes
ended...
Alper
"Henry Zaccak" <henry.zaccak(AT)gmail(DOT)com> wrote in message
news:IAC7DG.Gz...@cdf.toronto.edu...
Alex
I'm not just basing this on written work. I'm basing it on one on one
interaction with hundreds of students, including many who are not
in my courses and therefore have come to me purely for help and not
marks (current examples are students asking me about MAT137/157/240).
There is a very interesting book called "Inference and Understanding"
(I don't have it handy so I may have gotten the title wrong) that
goes into the cognitive reasons (based on experiments that aren't
homework assignments) that people have trouble with implication.
But your alternatives certainly apply in some cases (but then let's
not forget students who accidentally pick the right direction, since
we're accepting that there are students who intentionally pick the
wrong one).
>If a and b are very long and intricate statements (for example they
>contain several quantifiers and further ->'s), after finishing reading
>them it is very likely to forget the correct direction of the ->
>connective between them, or even that it is -> rather than /\. Human
>brains are simply too easy to buffer-overrun, especially during exams.
>
>Some students do a wrong thing knowingly because they can't solve the
>original problem and they think they can solve a modified problem (or
>make a ridiculous assumption) and get part marks. A student who can't
>prove a->b thinks naturally that he may still get some marks for
>proving b->a; it is also quite natural to refrain from admitting it in
>writing and see how far he can get (in terms of fooling the grader for
>marks).
>
>I have seen 4th year CS students who used 1<0 to finish their proofs.
>Do you honestly think that they believed what they wrote and needed more
>teaching? Clearly either they were in such a hurry that they didn't stop
>to even look at it, or they knew it was wrong but felt that handing
>in falsehood was better than handing in blank.
I am well aware of this. I saw a lot of it on the CSC236 exam (along
with words like "clearly" and "of course"), and kept thinking "nice
try", and then deducted marks for having done something wrong
(as opposed to giving marks for having done something).
>Yes I know, don't attribute to malice what can be explained by
>stupidity. But the students are neither malicious nor stupid, just
>desperate.
>
>Don't attribute to stupidity what can be explained by despair.
We consider this the explanation for a lot of the plagiarism we see.
Gary Baumgartner
I agree. CSC 228 was about the only computer science course I really
hated. (Well, CSC 384 was a disappointment, but not as bad.)
I remember having to learn basically all of C++'s iostream libraries on
my own for that course. We were specifically forbidden from using C's
scanf() and such (which I saw as much more suitable for the tasks we
were given). About the only thing I learned in that course was a
deepened hatred of C++. I think I could have done each of the
assignments in a few lines of perl (with its built-in hashes, regexes,
and garbage collection) instead of the hundreds of lines of C++.
However, I didn't mind having to implement the B Trees, since I think
that's an interesting data structure.
-- AMH
MW
"Tom" <tom....@utoronto.ca> wrote in message
news:IC43Ht.HI...@cdf.toronto.edu...