Thanks for the reply! I did some more looking around and I found this, which is what I'm trying to get at.
I was able to get the "online-wav-gmm-decode-faster" method working to transcribe some new audio at the utterance level, which is helpful. But I'm still struggling to decode new audio when I don't have the text, utt2spk, spk2utt, and segments files for it.

I created blank text, utt2spk, segments, and spk2utt files and ran the Kaldi for Dummies run.sh script, but utils/validate_data_dir.sh throws an error that spk2utt is empty. So I commented out the validate_data_dir.sh call in an effort to at least get steps/make_mfcc.sh to create the MFCCs for the new audio. During a separate successful run I also printed out the compute-mfcc-feats command to see if I could somehow call it directly, but it doesn't look like I can do that.

What I ultimately want is to decode this new audio so that I can run steps/get_ctm.sh and get the alignments at the word level. I definitely need to better understand this process, but in the meantime, thanks in advance for any help! For reference, here is roughly what I have so far and what I'm attempting (all the directory and utterance names below are placeholders I made up):
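For the data dir itself, this is what I've put together instead of the blank files. My understanding is that the segments file is optional (without it, each wav.scp entry is treated as one whole-file utterance) and that text isn't needed just to decode, so I'm skipping those checks in the validator:

    mkdir -p data/newaudio
    # wav.scp: utterance-id -> wav path (no segments file, so each
    # wav is treated as one whole-file utterance)
    printf '%s\n' 'utt1 /path/to/new1.wav' 'utt2 /path/to/new2.wav' \
      > data/newaudio/wav.scp
    # utt2spk: I have no speaker info, so I map each utterance to
    # itself as its own "speaker"
    printf '%s\n' 'utt1 utt1' 'utt2 utt2' > data/newaudio/utt2spk
    # spk2utt generated from utt2spk with the standard helper
    utils/utt2spk_to_spk2utt.pl data/newaudio/utt2spk > data/newaudio/spk2utt
    # validate, skipping the text and feature checks
    utils/validate_data_dir.sh --no-text --no-feats data/newaudio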
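Then for the features, rather than commenting out the validation, this is what I'm attempting (same placeholder dirs as above):

    # extract MFCCs and per-speaker CMVN stats for the new data
    steps/make_mfcc.sh --nj 1 --mfcc-config conf/mfcc.conf \
      data/newaudio exp/make_mfcc/newaudio mfcc
    steps/compute_cmvn_stats.sh data/newaudio exp/make_mfcc/newaudio mfcc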
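And this is the kind of direct compute-mfcc-feats call I was experimenting with; I think it mirrors what make_mfcc.sh runs internally, but I may be missing a step, which would explain why it didn't work for me:

    mkdir -p mfcc
    # read wavs via the scp, write features as an archive plus index
    compute-mfcc-feats --config=conf/mfcc.conf scp:data/newaudio/wav.scp \
      ark,scp:mfcc/raw_mfcc_newaudio.ark,mfcc/raw_mfcc_newaudio.scp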
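Finally, here is the decode and CTM step I'm trying to reach (exp/tri1 is just a placeholder for whichever trained model/graph I end up using):

    # decode the new data against an existing graph; decode.sh picks up
    # final.mdl from the parent dir of the decode dir (exp/tri1 here)
    steps/decode.sh --nj 1 exp/tri1/graph data/newaudio exp/tri1/decode_newaudio
    # word-level CTM from the lattices produced above; the output should
    # land under exp/tri1/decode_newaudio/score_*/ if I'm reading the
    # script right
    steps/get_ctm.sh data/newaudio exp/tri1/graph exp/tri1/decode_newaudio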