I noticed that in the final line of your log file, the following command was being run:
parallel_assign_taxonomy_uclust.py -i open_otu_pick/uclust_97/rep_set.fna -o open_otu_pick/uclust_97/uclust_assigned_taxonomy -T --jobs_to_start 16
Each one of those jobs will index a full copy of your database and hold it in memory. It's possible that with 16 jobs, your computer runs out of RAM and everything slows to a crawl.
You could try checking in on your available memory during this step to see if all your RAM is being used. (You can do this using the program 'top' or a system monitor.)
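If you'd rather check from the command line than with top, something like this is a quick sketch (Linux only, assuming the `free` utility from procps is installed, which it is on most distributions):

```shell
# Print total/used/available memory in human-readable units.
free -h

# To watch it continuously while the taxonomy step runs, refresh every 5 seconds:
# watch -n 5 free -h
```

If the "available" column drops near zero and swap usage climbs while the jobs run, memory is very likely the bottleneck.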
You could also try using fewer jobs, say 4, so that you are less likely to run out of RAM.
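For example, the same command with 4 jobs would be:

```
parallel_assign_taxonomy_uclust.py -i open_otu_pick/uclust_97/rep_set.fna -o open_otu_pick/uclust_97/uclust_assigned_taxonomy -T --jobs_to_start 4
```

This should use roughly a quarter of the memory, at the cost of some extra wall-clock time.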
Let me know if that helps!