After finally getting my ECal impedance states stored as a databased cal standard, I am now able to measure an ECal module, set up a calibration config that references the raw measured ECal impedance states against the stored databased standards, and then perform error correction. However, there are some issues I've noted. First, my setup:
Keysight N5247B PNA-X
Keysight N4694-6003 67 GHz ECal module
Instrument setup: 10 MHz-50 GHz, 300 Hz IF BW, 5000 points, 0 dBm power
Steps:
- Measure the ECal states, including switch terms, and save as raw measurement data
- Measure my DUTs (a 50 ohm transmission line and a 50 ohm Beatty standard)
- Create a cal config of the raw impedance states referencing the databased ECal standards
- Perform error correction of the raw data
- (BTW: getting to this point took several tries, as METAS kept crashing and exiting during either the calibration computation or the error-correction computation.)
- Visualize the error-corrected data with uncertainty information
- Export the waveforms as S2P
- Import into PLTS and compare with the same DUT error-corrected by PLTS (ECal calibration using the internal firmware), with the same ECal module under the same lab ambient conditions
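As a side note on the export/import steps: the Touchstone v1 (.s2p) format carries only a frequency column and real/imaginary (or mag/angle) pairs per S-parameter, with no field for uncertainty data, so the METAS uncertainties cannot follow the waveforms into PLTS. A toy round trip to illustrate the format (my own minimal writer/reader, not the METAS exporter):

```python
import os
import tempfile
import numpy as np

def write_s2p(path, f_hz, s):
    """Toy Touchstone v1 writer: one line per frequency, S-parameters as
    real/imag pairs in 2-port order S11 S21 S12 S22. Note there is no
    field anywhere for uncertainty data."""
    with open(path, "w") as fh:
        fh.write("# Hz S RI R 50\n")
        for fi, si in zip(f_hz, s):
            vals = (si[0, 0], si[1, 0], si[0, 1], si[1, 1])
            fh.write(f"{fi:.6e} " +
                     " ".join(f"{v.real:.9e} {v.imag:.9e}" for v in vals) +
                     "\n")

def read_s2p(path):
    """Toy reader matching the writer above (skips option/comment lines)."""
    data = np.atleast_2d(np.loadtxt(path, comments=("!", "#")))
    f_hz = data[:, 0]
    c = data[:, 1::2] + 1j * data[:, 2::2]   # S11, S21, S12, S22 columns
    s = np.empty((f_hz.size, 2, 2), complex)
    s[:, 0, 0], s[:, 1, 0], s[:, 0, 1], s[:, 1, 1] = c.T
    return f_hz, s

# Round-trip a few synthetic points
rng = np.random.default_rng(1)
f = np.linspace(10e6, 50e9, 5)
s = rng.standard_normal((5, 2, 2)) + 1j * rng.standard_normal((5, 2, 2))
path = os.path.join(tempfile.gettempdir(), "toy.s2p")
write_s2p(path, f, s)
f2, s2 = read_s2p(path)
```

(Touchstone v2 and the METAS native formats differ; this just shows why a plain .s2p drops the uncertainty information.)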
I notice a few things:
- There is significantly more trace noise on both the 50 ohm line and the Beatty standard in the METAS-calibrated data than in the PLTS/firmware-calibrated data
- The time-domain plots show a greater computed rise time for the METAS-calibrated data than for the PLTS data
- S11 diverges between the two, but not until around 24 GHz
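For what it's worth, one way to put a number on the noise observation is to estimate trace noise as the RMS residual after smoothing each exported trace. Everything below is a synthetic sketch (made-up noise levels and a hypothetical `trace_noise` helper), just to show the estimator:

```python
import numpy as np

def trace_noise(trace, window=51):
    """Rough trace-noise estimate: RMS of the residual after a
    moving-average smooth. Works on any scalar trace (dB or linear)."""
    kernel = np.ones(window) / window
    smooth = np.convolve(trace, kernel, mode="same")
    resid = (trace - smooth)[window:-window]   # drop edge-biased samples
    return np.sqrt(np.mean(resid ** 2))

# Synthetic stand-ins for the two exported traces (5000 points, 10 MHz-50 GHz):
rng = np.random.default_rng(0)
f = np.linspace(10e6, 50e9, 5000)
ripple = 0.05 * np.sin(2 * np.pi * f / 5e9)             # slow systematic ripple
trace_a = ripple + 0.002 * rng.standard_normal(f.size)  # quieter trace
trace_b = ripple + 0.020 * rng.standard_normal(f.size)  # noisier trace

print(trace_noise(trace_a), trace_noise(trace_b))
```

Running this on the actual METAS and PLTS exports over the same span would tell you whether the extra noise is broadband (pointing at the 300 Hz IF / averaging actually used during the ECal state measurements) or structured ripple (pointing at the calibration model itself).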
My obvious question to all of the above is "why?" Why more noise? The S2P doesn't contain any uncertainty data, and the noise was already evident in the visualization inside METAS, so it is not an artifact of the S2P conversion. Why the longer rise time, when the frequency content of both measurements is limited to 50 GHz? PLTS would apply the same time-domain conversion to both waveforms. And why is there an impedance difference between the two 50 ohm traces, while the two Beatty-standard traces remain relatively similar?
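On the rise-time question, a sanity check on the band-limited floor may help: for data band-limited to 50 GHz, the 10-90 % rise time of an ideal step (here modeled with an assumed Gaussian rolloff, which is not necessarily PLTS's actual window) is roughly 0.34/BW ≈ 7 ps, so any rise time beyond that floor has to come from the corrected S-parameters themselves, not from the transform. A self-contained sketch:

```python
import numpy as np

f3db = 50e9                        # assumed -3 dB bandwidth of the data
df = 10e6                          # frequency-grid spacing (10 MHz, as in the setup)
f = np.arange(0, 500e9 + df, df)   # grid extends past 50 GHz for fine time resolution
a = np.log(2) / (2 * f3db**2)      # Gaussian so |H(f3db)| = 1/sqrt(2)
t0 = 1e-9                          # linear-phase delay to keep the edge off t = 0
H = np.exp(-a * f**2) * np.exp(-2j * np.pi * f * t0)

h = np.fft.irfft(H)                # impulse response
dt = 1.0 / (df * h.size)           # time step of the IFFT grid (1 ps here)
step = np.cumsum(h)
step /= step[int(2e-9 / dt)]       # normalize to the settled value after the edge

# 10-90 % crossings via interpolation on the rising edge
t = np.arange(h.size) * dt
win = slice(int(0.5e-9 / dt), int(1.5e-9 / dt))
t10 = np.interp(0.1, step[win], t[win])
t90 = np.interp(0.9, step[win], t[win])
print((t90 - t10) * 1e12, "ps")    # ~0.34 / 50 GHz, i.e. about 7 ps
```

If both data sets clear this floor by the same transform, the extra METAS rise time would point at a difference in the corrected high-frequency S21 (loss/rolloff above ~24 GHz, consistent with the S11 divergence you see) rather than at the time-domain conversion.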
I'd love to get to the bottom of this, since we need to correlate results from PLTS calibration (our traditional process and BKM) with METAS calibration using ECal (and also probe calibration) to keep moving forward.
Thanks!
M