Logic Pro X has an audio file editor, and within it there's a pencil tool you can use to redraw waveforms. With my skills, you may not always get the shape you hoped to draw! There are also trimming tools and copy/paste functions for chopping up recorded sound. These audio editing functions aren't much use to me beyond getting rid of unwanted clicks. Since Logic grew out of a MIDI sequencer, Adobe Audition and Pro Tools are likely more intuitive for audio editing. Hope this helps.
I haven't done any audio editing on the files themselves within Logic, though I do a fair bit with the regions when it comes to trimming and fade automation.
For the case Tusker mentions, unwanted clicks or sounds, I tend to try some automated EQ to get rid of it (though it depends on the sound). For instance, my friend's mic is not great and he sent me a vocal track with a really bad "F" consonant. It sounded like a loud burst of static, and no de-esser or other tool helped, so I found the worst frequencies, pulled them down a fair bit, and automated the EQ to come on just for the start of the F. Lots of ways to skin a cat sometimes!
The file editor and pencil tools work very similarly in Logic and Pro Tools.
In Logic you need to open the File Editor first; that feature isn't on the main sequencing/arrange view.
Physion offers more effects options, but the EQ in Split EQ is more versatile. Both are good tools that I use here. I don't have Logic or Pro Tools, although I'm considering Logic. I'm using Waveform Pro 12. There are many ways to treat audio, but I think Eventide has hit the mark with their method of splitting transients and tones. Very clean work.
6. Does adjusting the volume fader impact the plugins' processing power/quality at all? I read that the volume fader comes after all the plugins and sends, flex time/pitch, and gain adjustments. Is that true? Does the fader come even after automation?
1) In the file editor, click the local View menu and you'll see "Amplitude Percentage" is ticked. You can choose "Amplitude Sample Value" as an alternative; both should be self-explanatory.
Remember, the sample/file editor is showing the contents of the *file* - it has no idea what you've set the fader to on the mixer channel it's eventually playing through, so you can't expect it to display the level at which you'll eventually *hear* this file during playback. The sample/file editor is showing you the audio file contents and providing various editing facilities on the file itself - none of this has anything to do with Logic's mixer.
-6dBFS is louder than -18dBFS, and 0dBFS is louder than -6dBFS. The clue is the minus numbers - effectively -infinity is silence, and then it gets louder up to 0dBFS (and beyond into positive numbers). So the level from -18dBFS to -6dBFS to 0dBFS is getting bigger/louder in every case.
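If it helps to see that scale as numbers, here's a quick sketch (Python, using the standard 10^(dB/20) conversion; the function name is mine, not anything from Logic):

```python
# Convert dBFS to linear amplitude relative to full scale (1.0 = 0 dBFS).
def dbfs_to_amplitude(dbfs: float) -> float:
    return 10 ** (dbfs / 20)

for level in (-18.0, -6.0, 0.0):
    print(f"{level:+6.1f} dBFS -> {dbfs_to_amplitude(level):.3f} of full scale")

# Output:
#  -18.0 dBFS -> 0.126 of full scale
#   -6.0 dBFS -> 0.501 of full scale
#   +0.0 dBFS -> 1.000 of full scale
```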
You can see roughly where -18dBFS is on the meters, so you have a meter ballpark to aim for. You don't need to be exact, it's just a handy guide to roughly get you in the right place. It's roughly two-thirds up the meters (depending on your meter scale settings) - that's all you really need to roughly aim for when recording.
There are many ways to change volume on audio in Logic, depending on your needs, and different tools have different possibilities. The ability to change the gain of each note is clearly different from the ability to adjust gain only on the region. If you want to adjust the gain of individual notes, region gain is the wrong tool, as it doesn't allow that. The more you use Logic, the more you'll get a feel for which tools are most appropriate for any given task.
Yes - gaining the audio destructively, with audio editing tools, or with region gain all happens before that audio feeds the mixer channel. Automation controls the mixer channel: plugins, sends and faders. Automation is "driving" those mixer controls on your behalf.
The channel fader is post plugins, yes, and has no effect on audio going into plugins on that channel. The manual has a good section on the processing order of a channel, which is worth familiarising yourself with. I don't understand "fader comes after automation". Automation is just the computer moving the fader, rather than you moving the fader manually. Either way, the fader is moving, and adjusting the volume at that point in the signal path.
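If a sketch helps, here's a rough mental model of that order in code - purely illustrative, not how Logic is actually implemented:

```python
# Conceptual sketch of the per-sample signal flow described above.
# Names are illustrative only, not Logic's internals.
def channel_output(file_sample, region_gain, plugins, fader_gain):
    s = file_sample * region_gain   # region/destructive gain: before the channel
    for plugin in plugins:          # plugins see the post-region-gain signal
        s = plugin(s)
    return s * fader_gain           # the fader (manual or automated) acts last

# Automation simply supplies fader_gain (and plugin parameters) over time
# instead of you moving the controls by hand.
```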
It's not as critical as it was back in the analog days, or when interfacing with hardware. The general principle is to record at 24-bit, not too hot and leave headroom, which will drive into plugins and your mix bus with appropriate headroom. If you constantly have to keep turning stuff down to not clip your output levels, then that's a sign you're going too far and running too hot generally.
Also, with 24-bit recording, over -18dBFS is too loud, but is there a general dBFS threshold that I shouldn't go below for effective processing of vocals down the road? Now that I'm trying to avoid the proximity effect and recording too hot, I tend to record very quiet. I wonder if too quiet is the reason my audio quality is still not good enough.
Based on what you've explained, it seems the most important place to watch the gain is the recording phase. And as long as my instruments and recordings never peak over 0dBFS in subsequent processing, I should be good. Is that all I need to care about in terms of "gain staging" in a purely digital chain? Does it no longer matter if my recording got louder after the EQ, then went into the compressor and came out slightly louder than before, etc.?
I don't really understand that question. If you mean "what was my peak dBFS level", then yes, the meter on your output channel tells you this, and will help you set the level how you want - there's no need to go analysing the resultant audio file afterwards.
Try this - in a fresh project, insert a Test oscillator and set it to, say, -18dBFS, and make sure it's hitting the master bus at the same level. Bounce a piece of this out, and look at the audio file in the file editor. Now you can see exactly where that level is in the file.
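You can also predict what you should see before bouncing - the arithmetic is simple, and it ties in with the "Amplitude Percentage" view mentioned above (a quick Python sketch):

```python
# What a -18 dBFS test tone should look like in the file editor's
# "Amplitude Percentage" view (peak amplitude, not RMS):
peak = 10 ** (-18 / 20)        # ~0.126 of full scale
print(f"{peak * 100:.1f}%")    # -> 12.6%
```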
All you need to do is to make sure you have enough headroom, so you can sing as loud as you need, without worrying that the recording is going to clip. In the old days of 16-bit fixed point digital (eg, DAT machines and so on), this was hard - you want to be loud to maximise signal-to-noise, but not clip. With 24-bit, the level of dynamic range we have is more than we ever need, so these days it's easy.
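For the arithmetic behind that: dynamic range is roughly 6.02dB per bit, so:

```python
# Rough dynamic range of fixed-point audio: ~6.02 dB per bit.
for bits in (16, 24):
    print(f"{bits}-bit: ~{6.02 * bits:.0f} dB")

# 16-bit: ~96 dB   (the DAT era: be loud enough to beat noise, but don't clip)
# 24-bit: ~144 dB  (room to record conservatively and still beat the noise floor)
```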
Internally, in Logic's mixer, going over 0dBFS isn't a problem as such - you don't clip and nothing is destroyed - you can turn it down again later in the chain without problem. Logic has somewhere around 1500dB of headroom, so going to +40dBFS internally won't distort anything. That said, typically staying below 0dBFS in the mixer is standard good practice. The point is just that, apart from input and output, in Logic's mixer you *can* go above 0dBFS without problem (this is the benefit of floating-point audio mixers, which all DAWs use for this reason).
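A quick way to convince yourself of this (a Python/NumPy sketch, nothing Logic-specific - and the ~1500dB figure falls out of 32-bit float maths):

```python
import numpy as np

# Boosting past 0 dBFS inside a float mixer is lossless: go up +40 dB,
# come back down, and the signal is recovered (to float precision).
x = np.sin(np.linspace(0, 2 * np.pi, 1001)) * 0.5   # a -6 dBFS sine
boost = 10 ** (40 / 20)                              # +40 dB as linear gain
print(np.allclose(x, (x * boost) / boost))           # True: nothing destroyed

# The oft-quoted "~1500 dB" is roughly the span of 32-bit float normals:
f32 = np.finfo(np.float32)
print(20 * np.log10(float(f32.max) / float(f32.tiny)))  # ~1529 dB
```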
No - all these plugins and mixer channels have volume controls to let you control the volume as necessary. Adding a bunch of gain onto your signal is not a problem as such, only in that now your signal is way louder than it was, so you'll probably want to turn it back down again in the mix. In practice, most people use a range of tools, so if you know you're doing a whole bunch of massive EQ boosts, it can make sense just to lower the output gain of that EQ plugin there and then, to avoid a much louder signal hitting further plugins down the line (your compressor will behave differently if you hit it louder, for example) - but these things are up to you, really.
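To see why hitting a compressor louder matters, here's a minimal sketch of a static compressor curve - hypothetical threshold and ratio, just to show the principle:

```python
# Why a compressor "behaves differently if you hit it louder": a static
# compressor curve with a hypothetical threshold (-18 dBFS) and ratio (4:1).
def gain_reduction_db(input_db, threshold_db=-18.0, ratio=4.0):
    over = max(0.0, input_db - threshold_db)   # dB above the threshold
    return over / ratio - over                  # dB of gain reduction applied

for level in (-20, -12, -6):
    print(f"in {level:>3} dBFS -> {gain_reduction_db(level):+.1f} dB")

# in -20 dBFS -> +0.0 dB   (below threshold: untouched)
# in -12 dBFS -> -4.5 dB
# in  -6 dBFS -> -9.0 dB   (a +6 dB EQ boost upstream = much harder squash)
```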
Re recording too quiet, attached is the latest thing I recorded. Even with the waveform zoom on, the waves still look tiny. I've been trying to troubleshoot from different angles to improve my audio quality, constantly A/B-ing with well-produced tracks. I wonder if too low a recording level could be contributing to my audio not sounding as clear and solid as I want. I'm a bit paranoid that if I record too close to the mic or too loud, it will have that roomy/boxy sound again, which is the biggest problem and time waster in my subsequent production process.
It's also very helpful and interesting to know there's such a large amount of headroom in modern DAWs. Your explanation will be a huge timesaver - I won't need to bother with precise gain staging at the plugin level on every chain!
All of my questions were on the input side. Thanks for pointing out the output gain, which I also don't understand well. It's fascinating that if I have two tracks that clip, the master track doesn't clip twice as much, and that in general the master track isn't summing all of the tracks' dBFS the way my linear intuition imagined.
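A small sketch of why the dB figures don't add linearly (Python/NumPy, illustrative only):

```python
import numpy as np

# dBFS values don't sum linearly. Two identical tracks, each peaking at
# -6 dBFS, mix to 0 dBFS (a +6 dB rise) - not some "doubled" dB figure.
track = np.sin(np.linspace(0, 2 * np.pi, 1001)) * 0.5     # peaks at -6 dBFS
mix = track + track                                        # fully correlated sum
print(f"{20 * np.log10(np.max(np.abs(mix))):+.1f} dBFS")   # -> +0.0 dBFS

# Real tracks are mostly uncorrelated, so they sum closer to +3 dB -
# which is part of why a busy mix doesn't clip "twice as much" as its parts.
```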
- In the quiet parts of my piece, neither the individual tracks nor the master clips; in the louder parts, some vocals clip and stay clipping for a few seconds, and drums and a couple of other synths occasionally peak over 0 but not sustained (which I assume is okay even for outputting from the DAW?). The mix sounds balanced.
- I have all tracks going to an aux track (39), on which I added some mix-bus EQ, compression and limiting before it goes to the output. How come the aux track is clipping by 2dB for a few seconds, but the master output is not clipping at all?