Hi Steve,
I gave my setup a good test over the weekend, making around 150 QSOs in a CW contest with linhpsdr (CWX + cwdaemon) and an HL2 (gateware 20200329_70p0). I've found a few software niggles to work on in linhpsdr.
I was keeping a close eye on linhpsdr's command-line debug output. Early in the contest I noticed some HL2 TX buffer issues with break-in keying, so to be safe I increased the CW hang time (to around 400 ms @ 30 wpm). However, because of this I was missing the start of some exchanges, and when people copy me as W5 instead of M5 I need to hear that first character! I have been looking into this today.
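For context, a quick back-of-the-envelope on those numbers (just the standard PARIS timing formula, nothing linhpsdr-specific):

```python
def dit_ms(wpm):
    # PARIS standard: one dit lasts 1200 / wpm milliseconds
    return 1200.0 / wpm

# At 30 wpm a dit is 40 ms, so a 400 ms hang is 10 dit lengths --
# longer than the 7-dit inter-word gap (280 ms), which is why a
# fast reply can start before the receiver un-mutes.
print(dit_ms(30))            # dit length at 30 wpm, in ms
print(7 * dit_ms(30))        # standard inter-word gap at 30 wpm, in ms
```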
I've read your post below a couple of times and feel I have got to grips with the detail. I have the latency set to 21 ms. The following debug output is based entirely on reading the control bits straight from the HL2 packets (1 RX @ 48000; I have confirmed in Wireshark that the RX:TX packet ratio is always 1:1):
C[3] = 0x8f Buffer recovery 0 1
C[3] = 0x1f
process_control_bytes: ppt=1 dot=0 dash=0
C[3] = 0x2f
C[3] = 0x3d
C[3] = 0x3d
C[3] = 0x3d
...
C[3] = 0x3d
process_control_bytes: ppt=0 dot=0 dash=0
C[3] = 0x3d
C[3] = 0x3d
C[3] = 0x3d
C[3] = 0x3d
C[3] = 0x46
C[3] = 0x56
C[3] = 0x66
process_control_bytes: ppt=1 dot=0 dash=0
C[3] = 0x76
C[3] = 0xf8 Buffer recovery 1 1
C[3] = 0xe9 Buffer recovery 1 1
C[3] = 0xd9 Buffer recovery 1 1
C[3] = 0xc9 Buffer recovery 1 1
C[3] = 0xb9 Buffer recovery 0 1
C[3] = 0xaa Buffer recovery 0 1
C[3] = 0x9a Buffer recovery 0 1
C[3] = 0x9a Buffer recovery 0 1
C[3] = 0x1a
C[3] = 0x1a
C[3] = 0x1a
C[3] = 0x1a
C[3] = 0x1a
...
C[3] = 0x1a
process_control_bytes: ppt=0 dot=0 dash=0
C[3] = 0x1a
C[3] = 0x1a
C[3] = 0x1a
C[3] = 0x1a
C[3] = 0x1f
C[3] = 0x2f
C[3] = 0x3e
process_control_bytes: ppt=1 dot=0 dash=0
C[3] = 0x4e
C[3] = 0x57
C[3] = 0x57
C[3] = 0x57
C[3] = 0x57
C[3] = 0x57
Note that PTT is off and the latency increases before PTT comes back on (CWX bit set in the TX packet). Our TX buffer level is now 0x57.
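For what it's worth, here is how I'm reading that byte. The field positions are purely my guess from eyeballing the debug lines above (bit 7 as the recovery flag, the low bits as the fill level), not anything taken from the gateware docs, so treat it as a sketch:

```python
def decode_c3(c3):
    # HYPOTHETICAL layout, inferred from the debug output above:
    #   bit 7      -> buffer-recovery indicator
    #   bits 6..0  -> TX buffer fill level
    recovery = bool(c3 & 0x80)
    level = c3 & 0x7F
    return recovery, level

print(decode_c3(0x57))   # normal running, level 0x57
print(decode_c3(0x80))   # recovery flagged, buffer empty
```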
In the following case, the FIFO is stable at C[3] = 0x1f before PTT goes to 0:
C[3] = 0x1f
C[3] = 0x1f
process_control_bytes: ppt=0 dot=0 dash=0
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x17
C[3] = 0x7
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x1f
C[3] = 0x17
C[3] = 0x7
C[3] = 0x80 Buffer recovery 0 1
C[3] = 0x80 Buffer recovery 0 1
I was trying to work out whether I could detect the latency changes from knowing when I sent the TX packet and when the TX is actually keyed. However, I couldn't find anything in the information that comes back from the HL2 that indicates the TX is keyed (rather than PTT being engaged). Perhaps this is to avoid breaking PowerSDR? (I don't think this information is useful beyond debugging anyway.)
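What I had in mind was something like the sketch below (a hypothetical helper of my own, not code from linhpsdr). Since the HL2 only reports PTT, it can only time the PTT edge rather than the key itself:

```python
import time

class KeyLatencyProbe:
    """Hypothetical helper: timestamp the moment we send a CWX TX packet,
    then the moment the returned status first shows PTT asserted, and
    report the difference as an estimate of the keying latency."""

    def __init__(self):
        self._sent = None
        self._last_ptt = 0

    def on_tx_packet_sent(self):
        # Remember when the first TX packet of this keying event went out.
        if self._sent is None:
            self._sent = time.monotonic()

    def on_status(self, ptt):
        # Returns the latency in seconds on a 0 -> 1 PTT edge, else None.
        delta = None
        if ptt and not self._last_ptt and self._sent is not None:
            delta = time.monotonic() - self._sent
            self._sent = None
        self._last_ptt = ptt
        return delta
```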
I hope this is clear. Am I doing something wrong in the configuration bits I am sending, or is this expected behaviour (i.e. should I just ignore the warning?), or a bug?
73 Matthew M5EVT.