LinHPSDR not working with > 7 Receivers


mr.ri...@gmail.com

Feb 10, 2022, 11:48:26 AM
to Hermes-Lite
Hi Group,

I was playing with LinHPSDR recently and noticed I could no longer run 8 or more receivers.

The 8th and higher receivers don't seem to get their RX frequency set correctly.
So, digging around in the code, I found that some recently added commands override the > 7 RX frequency settings.

      case 10:
        output_buffer[C0]=0x24;
        output_buffer[C1]=0x00;
        if(isTransmitting(radio)) {
          output_buffer[C1]|=0x80; // ground RX1 on transmit
        }
        output_buffer[C2]=0x00;
        if(radio->alex_rx_antenna==5) { // XVTR
          output_buffer[C2]=0x02;
        }
        output_buffer[C3]=0x00;
        output_buffer[C4]=0x00;
        break;
      case 11:
        // TX buffer size
        output_buffer[C0]=0x2E;
        output_buffer[C1]=0x0;
        output_buffer[C2]=0x0;
        output_buffer[C3]=0x4;
        output_buffer[C4]=0x15;
        break;

According to the latest USB protocol doc there is a C&C command at 0x24 (HL2 register 0x12): https://github.com/TAPR/OpenHPSDR-SVN/blob/master/Documentation/USB_protocol_V1.60.doc

C0  0 0 1 0 0 1 0 x

C1  0 0 0 0 0 0 0 0
    +---------------- Second Alex filters – needs documenting

C2  0 0 0 0 0 0 0 0
    +---------------- Second Alex filters – needs documenting

C3  0 0 0 0 0 0 0 0
    +---------------- Firmware Env Gain (bits [15:8])

C4  0 0 0 0 0 0 0 0
    +---------------- Firmware Env Gain (bits [7:0])


I don't see anything in the doc for case 11 (TX buffer size).

This probably doesn't matter to anyone who doesn't use > 7 receivers, but I thought I'd point out that the bits used by the HL2 for RX8-RX12 are being used for other things.

If you change the code to skip the last two commands, > 7 receivers works again:

    if(current_rx==0) {
      command++;
      //N1GP if(command>11) {
      if(command>9) {
        command=1;
      }
    }


-Rick / N1GP

"Christoph v. Wüllen"

Feb 10, 2022, 12:27:15 PM
to mr.ri...@gmail.com, herme...@googlegroups.com
a) More than 7 RX is HermesLite-II only (not in the HPSDR protocol).


b) The 0x2E command is HermesLite-II only. It should not be used
if there is no HermesLite-II. Worse, the values used there are NOT OK;
in particular, the "low water mark" filling of the TX buffer is too small.
I use:

case 11:
  // DL1YCF: HermesLite-II only
  // specify some more robust TX latency and PTT hang times.
  // A latency of 40 msec means that we first send two buffers
  // of TX IQ samples (assuming a buffer length of 1024 samples,
  // that is 21 msec) before the HL2 starts TXing. This should be
  // enough to prevent underflows and leave some head-room.
  // My measurements indicate that the TX FIFO can hold about
  // 75 msec or 3600 samples (cum grano salis).
  output_buffer[C0]=0x2E;
  output_buffer[C3]=20; // 20 msec PTT hang time, only bits 4:0
  output_buffer[C4]=40; // 40 msec TX latency, only bits 6:0


c) The 0x24 command is legal but probably only relevant for Orion-II boards.
But when it has completed, one should go back to case 1 (code from
pihpsdr showing the roll-back):

case 10:
  //
  // This is possibly only relevant for Orion-II boards
  //
  output_buffer[C0]=0x24;

  if(isTransmitting()) {
    output_buffer[C1]|=0x80; // ground RX2 on transmit, bit0-6 are Alex2 filters
  }
  if(receiver[0]->alex_antenna==5) { // XVTR
    output_buffer[C2] |= 0x02; // Alex2 XVTR enable
  }
  if(transmitter->puresignal) {
    output_buffer[C2] |= 0x40; // Synchronize RX5 and TX frequency on transmit (ANAN-7000)
  }
  //
  // This was the last command defined in the HPSDR document so we
  // roll back to the first command.
  // The HermesLite-II uses an extended command set so in this case
  // we proceed.
  //
  if (device == DEVICE_HERMES_LITE2) {
    command=11;
  } else {
    command=1;
  }
  break;

mr.ri...@gmail.com

Feb 10, 2022, 1:11:29 PM
to Hermes-Lite
On Thursday, February 10, 2022 at 12:27:15 PM UTC-5 "Christoph v. Wüllen" wrote:
a) More than 7 RX is HermesLite-II only (not in the HPSDR protocol).

 
True, but unfortunate, as the Angelia and Orion FPGAs can handle more RX slices, as well as GigE.
I had put modifications in to support it, but I guess there is not much desire for many RX slices.

A good use case would be SkimSrv doing 8-band CW decoding.

b) The 0x2E command is HermesLite-II only. It should not be used
if there is no HermesLite-II. More severe, the values used there are NOT OK,
especially the "low water mark" filling of the TX buffer is too small.
I use:

case 11:
  // DL1YCF: HermesLite-II only
  // specify some more robust TX latency and PTT hang times.
  // A latency of 40 msec means that we first send two buffers
  // of TX IQ samples (assuming a buffer length of 1024 samples,
  // that is 21 msec) before the HL2 starts TXing. This should be
  // enough to prevent underflows and leave some head-room.
  // My measurements indicate that the TX FIFO can hold about
  // 75 msec or 3600 samples (cum grano salis).
  output_buffer[C0]=0x2E;
  output_buffer[C3]=20; // 20 msec PTT hang time, only bits 4:0
  output_buffer[C4]=40; // 40 msec TX latency, only bits 6:0


OK, I see the 0x2E is OK, but perhaps it should have a clause to only do it for the HL2.

mr.ri...@gmail.com

Feb 10, 2022, 1:19:11 PM
to Hermes-Lite

On Thursday, February 10, 2022 at 12:27:15 PM UTC-5 "Christoph v. Wüllen" wrote:
b) The 0x2E command is HermesLite-II only. It should not be used
if there is no HermesLite-II.
OK, I see the 0x2E is OK, but perhaps it should have a clause to only do it for the HL2.
 

Scratch that; what I mean is that "case 10" will not work with the HL2 at > 7 RX, so it should be skipped if !HL2.
 


Matthew

Feb 11, 2022, 9:29:44 AM
to Hermes-Lite
I would be interested to have a look at this. Can you point me to the HL2 gateware you are using?

73 Matthew M5EVT.

mr.ri...@gmail.com

Feb 11, 2022, 12:21:50 PM
to Hermes-Lite

Matthew

Feb 11, 2022, 5:18:53 PM
to Hermes-Lite
I have had a look at this. I am guessing from the code you posted that you are using G0ORX's code. The case reset that Christoph is talking about is accounted for in my fork. However, there is no code to send the NCO freq for RX8 or RX9.

You might not have seen it: the HL2 wiki details the protocol and complements the USB protocol doc you mention. This explains output_buffer[C0]=0x2E; see ADDR 0x17 in the HL2 doc:


I am surprised if you only had to modify the lines you quoted in the G0ORX code. For example, the maximum number of receivers is coded to 8, see:


I am curious what you want to do with the 10 receivers. I have found that with > 4 RXs in linHPSDR the performance really suffers, to the point that it is unusable. This is related to the waterfall display update code (I believe I tracked this down to some sort of blocking call within GetPixels in the update_receiver function). If you set each RX to 1 FPS you can squeeze a bit more out. However, I have just modified some code in my fork and run some tests (with the gateware you link): even with 1 FPS per RX and the number of pixels on each waterfall at minimum settings, it was missing a lot of packets (in my fork I report via the command line the EP6 sequence number linHPSDR thought it was getting vs. the number it received in the packet).

In the past I have wondered about an option to not display the waterfall for lots of receivers. This might work, but if you then want to connect 10 PulseAudio sinks to digi mode software, there may be another wall to hit...

73 Matthew M5EVT.

mr.ri...@gmail.com

Feb 12, 2022, 10:36:41 AM
to Hermes-Lite
On Friday, February 11, 2022 at 5:18:53 PM UTC-5 Matthew wrote:
I have had a look at this. I am guessing from the code you posted that you are using G0ORX's code. The case reset that Christoph is talking about is accounted for in my fork. However, there is no code to send the NCO freq for RX8 or RX9.


Yes G0ORX's code.
 
You might not have seen it: the HL2 wiki details the protocol and complements the USB protocol doc you mention. This explains output_buffer[C0]=0x2E; see ADDR 0x17 in the HL2 doc:


I am surprised if you only had to modify the lines you quoted in the G0ORX code. For example, the maximum number of receivers is coded to 8, see:


Here are the changes I made to get 8 RX working; due to some not-yet-found issue, > 8 RX doesn't work
(even after changing radio.h to support 10):
#define MAX_RECEIVERS 10
...
#define TRANSMITTER_CHANNEL 10
#define WIDEBAND_CHANNEL 11
#define BPSK_CHANNEL 12

8RX OK:
diff --git a/protocol1.c b/protocol1.c
index 4918111..1330b8f 100644
--- a/protocol1.c
+++ b/protocol1.c
@@ -1219,7 +1219,7 @@ void ozy_send_buffer() {
         nreceivers=radio->receivers;
 #endif
         if(current_rx<radio->discovered->supported_receivers) {
-          output_buffer[C0]=0x04+(current_rx*2);
+          output_buffer[C0]=(current_rx < 7)?(current_rx+2)<<1:(current_rx+11)<<1;
 #ifdef PURESIGNAL
           int v=receiver[current_rx/2]->id;
           if(isTransmitting(radio) && radio->transmitter->puresignal) {
@@ -1615,7 +1615,7 @@ void ozy_send_buffer() {
 
     if(current_rx==0) {
       command++;
-      if(command>11) {
+      if(command>9) {
         command=1;

       }
     } 

I am curious what you want to do with the 10 receivers. I have found that with > 4 RXs in linHPSDR the performance really suffers, to the point that it is unusable. This is related to the waterfall display update code (I believe I tracked this down to some sort of blocking call within GetPixels in the update_receiver function). If you set each RX to 1 FPS you can squeeze a bit more out. However, I have just modified some code in my fork and run some tests (with the gateware you link): even with 1 FPS per RX and the number of pixels on each waterfall at minimum settings, it was missing a lot of packets (in my fork I report via the command line the EP6 sequence number linHPSDR thought it was getting vs. the number it received in the packet).


For me basically to test FPGA changes.
 
In the past I have wondered about an option to not display the waterfall for lots of receivers. This might work, but if you then want to connect 10 PulseAudio sinks to digi mode software, there may be another wall to hit...

73 Matthew M5EVT.




Matthew

Feb 13, 2022, 5:25:57 AM
to Hermes-Lite
Understood Rick,

Are you also sending the NCO freq for RX > 8 to (HL2) ADDR 0x12, 0x13, etc. (see the HL2 protocol wiki, not the P1 doc you reference above)? I've just had a look at the code I modified, and I have nothing more to add to your diff above. But I know I wasn't sending the NCO freq to RX > 7, so I wasn't expecting them to work.

In case you aren't aware of it, SparkSDR works on macOS and Linux, and when I checked on Friday it seemed to support the 10 RX gateware. However, if you want to modify the source code and add debug statements for testing FPGA gateware, you can't do that with SparkSDR.

73 Matthew M5EVT.

Steve Haynal

Feb 13, 2022, 3:06:37 PM
to Hermes-Lite
Hi Rick,

Just a little bit of background on the "more than 7" receiver extension. This was created back in the days of the Hermes-Lite 1 when one of the FPGA sticks we were using had a very large FPGA. At one point we had 32 receivers running. With the Hermes-Lite 2, I scaled this back to a max of 12 receivers. This was all done before conflicting assignments were added to the openhpsdr protocol 1. In hindsight, there could have been better coordination with adding extensions to protocol 1, but protocol 1 was dead at the time in favor of protocol 2. The Red Pitaya also uses the 7+ receiver extension, so there may actually be more users of this extension than the conflicting recent protocol 1 addition.

The HL2 is able to squeeze more receivers into a smaller FPGA than openhpsdr for several reasons. First, we run at a slower sampling rate (no 6m), so we can double the speed of most DSP blocks, effectively time-multiplexing them. Second, for the LO the HL2 uses an NCO that is more resource-efficient, given the available multipliers and memories, than a LUT-costly CORDIC.

Finally, the HL2 is gigabit. This is required for running 10 receivers at 192kHz for SparkSDR.

73,

Steve
kf7o

mr.ri...@gmail.com

Feb 13, 2022, 4:52:00 PM
to Hermes-Lite
Hi Steve & Matthew,

Steve,
yes, I remember that stick with the big FPGA; I still have a couple of them.
I also recall you converting over to the NCO vs. the Cordic to be more resource efficient.

Question: if the sampling rate were greater (6m included), could you still 'double the speed of most DSP blocks',
or would that then be beyond the FPGA's capabilities?

Matthew,
Yes, the 'sending the NCO freq to RX > 8' is done via: 
-          output_buffer[C0]=0x04+(current_rx*2);
+          output_buffer[C0]=(current_rx < 7)?(current_rx+2)<<1:(current_rx+11)<<1;

Thanks for the suggestion of SparkSDR; yes, I use it frequently and it works well with the 10 RX gateware for me.
But as you say, I cannot modify it for test cases.

So I have found another area where > 8 RX is not implemented. This time I used your git repo
to test and modify (the diff below is for the master branch, https://github.com/m5evt/linhpsdr.git).

I was able to get the 10 RX CIC gateware working with the following modifications.
Note I added -fcommon to the Makefile to get past a build error:
/usr/bin/ld: radio_dialog.o:/home/mh/Desktop/SDR/HPSDR/PIHPSDR/linhpsdr/DEL/linhpsdr_work/cwdaemon.h:71: multiple definition of `cwd_changed_at'; transmitter.o:/home/mh/Desktop/SDR/HPSDR/PIHPSDR/linhpsdr/DEL/linhpsdr_work/cwdaemon.h:71: first defined here
/usr/bin/ld: cwdaemon.o:/home/mh/Desktop/SDR/HPSDR/PIHPSDR/linhpsdr/DEL/linhpsdr_work/cwdaemon.h:71: multiple definition of `cwd_changed_at'; transmitter.o:/home/mh/Desktop/SDR/HPSDR/PIHPSDR/linhpsdr/DEL/linhpsdr_work/cwdaemon.h:71: first defined here
/usr/bin/ld: midi3.o:/home/mh/Desktop/SDR/HPSDR/PIHPSDR/linhpsdr/DEL/linhpsdr_work/cwdaemon.h:71: multiple definition of `cwd_changed_at'; transmitter.o:/home/mh/Desktop/SDR/HPSDR/PIHPSDR/linhpsdr/DEL/linhpsdr_work/cwdaemon.h:71: first defined here

Also when running many receivers I started getting many:
EP6 ERROR packet 1455 pc 669
EP6 ERROR packet 14471 pc 14447
EP6 ERROR packet 18079 pc 18055
EP6 ERROR packet 21675 pc 21663
EP6 ERROR packet 22901 pc 22891

So I increased the SO_RCVBUF size to 2MB. This seemed to fix that issue.

Next, the area that needed to be updated for more receivers is transmitter.c.
The protocol1_tx_scheduler was only implemented for up to 7 receivers. I added more
entries to the arrays, for up to 12 receivers.

Here is the diff:

===================================

diff --git a/Makefile b/Makefile
index 260a67f..366fe12 100644
--- a/Makefile
+++ b/Makefile
@@ -6,7 +6,7 @@ UNAME_S := $(shell uname -s)
 GIT_DATE := $(firstword $(shell git --no-pager show --date=short --format="%ai" --name-only))
 GIT_VERSION := $(shell git describe --abbrev=0 --tags)
 
-CC=gcc
+CC=gcc -fcommon
 LINK=gcc
 
 GTKINCLUDES=`pkg-config --cflags gtk+-3.0`
diff --git a/protocol1.c b/protocol1.c
index 1f35dac..07ec9be 100644
--- a/protocol1.c
+++ b/protocol1.c
@@ -330,6 +330,10 @@ static void start_protocol1_thread() {
         exit(-1);
       }
 
+      // N1GP, 'netstat -anus', shows '10892895 receive buffer errors' and it increments quickly when RX>4 and samplerate>96K
+      // also see many SEQ errors, with a 2MB buffer this goes away
+      int size = 8 * 1024 * 1024;
+      setsockopt(data_socket, SOL_SOCKET, SO_RCVBUF, &size, (socklen_t)sizeof(int));
       int optval = 1;
       if(setsockopt(data_socket, SOL_SOCKET, SO_REUSEADDR, &optval, sizeof(optval))<0) {
         perror("data_socket: SO_REUSEADDR");
@@ -1080,7 +1084,7 @@ void ozy_send_buffer() {
       case 2: // rx frequency
 //        nreceivers=radio->receivers;

         if(current_rx<radio->discovered->supported_receivers) {
-          output_buffer[C0]=0x04+(current_rx*2);
+          output_buffer[C0]=(current_rx < 7)?(current_rx+2)<<1:(current_rx+11)<<1;
 #ifdef PURESIGNAL
           if (isTransmitting(radio) && (radio->transmitter->puresignal != NULL)
              && ((current_rx == radio->discovered->ps_tx_fdbk_chan)
@@ -1482,7 +1486,10 @@ void ozy_send_buffer() {
     if(current_rx==0) {
       command++;
       if (radio->discovered->device==DEVICE_HERMES_LITE2) {
-        if (command>11) {
+        if (command==10) { // skip 10
+          command=11;
+        }
+        else if (command>11) {
           command=1;
         }
       }
diff --git a/radio.c b/radio.c
index 54c4dd3..05247b7 100644
--- a/radio.c
+++ b/radio.c
@@ -289,7 +289,7 @@ g_print("radio_save_state: %s\n",filename);
   filterSaveState();
   bandSaveState();
 
-  for(i=0;i<radio->discovered->supported_receivers;i++) {
+  for(i=0;i<MAX_RECEIVERS&&i<radio->discovered->supported_receivers;i++) {
     if(radio->receiver[i]!=NULL) {
       receiver_save_state(radio->receiver[i]);
     }
diff --git a/radio.h b/radio.h
index a2d03c2..4c3db7b 100644
--- a/radio.h
+++ b/radio.h
@@ -20,14 +20,14 @@
 #ifndef RADIO_H
 #define RADIO_H
 
-#define MAX_RECEIVERS 8
+#define MAX_RECEIVERS 10
 #define MAX_DIVERSITY_MIXERS 2
 
 #define MAX_BUFFER_SIZE 2048
 
-#define TRANSMITTER_CHANNEL 8
-#define WIDEBAND_CHANNEL 9
-#define BPSK_CHANNEL 10
+#define TRANSMITTER_CHANNEL 10
+#define WIDEBAND_CHANNEL 11
+#define BPSK_CHANNEL 12
 
 #include "diversity_mixer.h"
 #include "hl2.h"
diff --git a/transmitter.c b/transmitter.c
index 341749e..f29cfe1 100644
--- a/transmitter.c
+++ b/transmitter.c
@@ -468,7 +468,7 @@ const double cw_waveform[CW_WAVEFORM_SAMPLES] = {
 1.0
 };
 
-const int protocol1_tx_scheduler[20][26] = {
+const int protocol1_tx_scheduler[40][26] = {
 //  Three receivers - 25 Tx queue events per set of 63, 126, 252, 504  received frames
 {0, 3, 5, 8, 10, 13, 15, 18, 20, 23, 25, 28, 30, 33, 35, 38, 40, 43, 45, 48, 50, 53, 55, 58, 60, -1}, // 25 frames per 63
 {0, 6, 10, 16, 20, 26, 30, 36, 40, 46, 50, 56, 60, 66, 70, 76, 80, 86, 90, 96, 100, 106, 110, 116, 120, -1}, // 25 frames per 126
@@ -493,7 +493,32 @@ const int protocol1_tx_scheduler[20][26] = {
 {6, 12, 17, 23, 29, 34, 40, 45, 51, 57, 62, -1, -1, -1, -1, -1, -1, -1 , -1, -1, -1, -1, -1, -1, -1, -1}, // 11 frames per 63
 {12, 24, 34, 46, 58, 68, 80, 90, 102, 114, 124, -1, -1, -1, -1, -1, -1, -1 , -1, -1, -1, -1, -1, -1, -1, -1}, // 11 frames per 126
 {24, 48, 68, 92, 116, 136, 160, 180, 204, 228, 248, -1, -1, -1, -1, -1, -1, -1 , -1, -1, -1, -1, -1, -1, -1, -1},  // 11 frames per 252
-{48, 96, 136, 184, 232, 272, 320, 360, 408, 456, 496, -1, -1, -1, -1, -1, -1, -1 , -1, -1, -1, -1, -1, -1, -1, -1}};  // 11 frames per 504  
+{48, 96, 136, 184, 232, 272, 320, 360, 408, 456, 496, -1, -1, -1, -1, -1, -1, -1 , -1, -1, -1, -1, -1, -1, -1, -1},  // 11 frames per 504  
+// 8 receivers - 10 Tx queue events per set of 63, 126, 252, 504  received frames
+{6, 13, 19, 25, 32, 38, 44, 50, 57, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 10 frames per 63
+{13, 25, 38, 50, 63, 76, 88, 101, 113, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 10 frames per 126
+{25, 50, 76, 101, 126, 151, 176, 202, 227, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 10 frames per 252
+{50, 101, 151, 202, 252, 302, 353, 403, 454, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 10 frames per 504
+// 9 receivers - 9 Tx queue events per set of 63, 126, 252, 504  received frames
+{7, 14, 21, 28, 35, 42, 49, 56, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 9 frames per 63
+{14, 28, 42, 56, 70, 84, 98, 112, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 9 frames per 126
+{28, 56, 84, 112, 140, 168, 196, 224, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 9 frames per 252
+{56, 112, 168, 224, 280, 336, 392, 448, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 9 frames per 504
+// 10 receivers - 8 Tx queue events per set of 63, 126, 252, 504  received frames
+{8, 16, 24, 32, 39, 47, 55, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 8 frames per 63
+{16, 32, 47, 63, 79, 94, 110, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 8 frames per 126
+{32, 63, 94, 126, 158, 189, 220, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 8 frames per 252
+{63, 126, 189, 252, 315, 378, 441, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 8 frames per 504
+// 11 receivers - 7 Tx queue events per set of 63, 126, 252, 504  received frames
+{9, 18, 27, 36, 45, 54, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 7 frames per 63
+{18, 36, 54, 72, 90, 108, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 7 frames per 126
+{36, 72, 108, 144, 180, 216, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 7 frames per 252
+{72, 144, 216, 288, 360, 432, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 7 frames per 504
+// 12 receivers - 6 Tx queue events per set of 63, 126, 252, 504  received frames
+{10, 21, 32, 42, 52, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 6 frames per 63
+{21, 42, 63, 84, 105, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 6 frames per 126
+{42, 84, 126, 168, 210, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 6 frames per 252
+{84, 168, 252, 336, 420, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} }; // 6 frames per 504
 
 void transmitter_save_state(TRANSMITTER *tx) {
   char name[80];


===================================

Here is a full 10 RX on my AMD Ryzen 7 2700X eight-core Linux system; at 192K the CPU loading is not bad:

linhpsdr_rx10.png

-Rick / N1GP

mr.ri...@gmail.com

unread,
Feb 13, 2022, 5:41:35 PM2/13/22
to Hermes-Lite

// Tx packet schedule synched to the rx packets
// Credit to N5EG for most of the code
// https://github.com/Tom-McDermott/gr-hpsdr/blob/master/lib/HermesProxy.cc

I realize N5EG came up with the array tables. I looked at his code and didn't find any formulas
for how he got all the values, although there was enough there to infer how to get the rest.

I put together a C program to calculate the values, which are part of the diff in the previous post,
although the data may not be correct and should probably be run past N5EG:

// Program to calculate Tx queue events per set of 63, 126, 252, 504  received frames.
//
// Rick / N1GP


#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PKT_SIZE 512
#define MAXRCVRS 12

int main (void)
{
        int c, i, j, l;
        float f;

        for (c = 1; c <= MAXRCVRS; c++)
        {
                /* A 512-byte USB frame carries an 8-byte header followed
                   by sample groups of 6 bytes of IQ per receiver plus
                   2 bytes of mic audio, so:
                   c = number of receivers
                   l = number of frames (sample groups) per packet
                */
                l = (PKT_SIZE - 8)/(8 + 6 * (c - 1));
                printf("// %d receivers - %d Tx queue events per set of 63, 126, 252, 504  received frames\n", c, l);
                for (j=63; j<=504; j<<=1)
                {
                        f=(float)j/l;
                        printf("{");
                        for (i=1; i<=26; i++)
                        {
                                printf("%d%c ", (i<=l)?(int)rint(f*i):-1, (i!=26)?',':'}');
                        }
                        printf("%s", (j==504 && c==MAXRCVRS)?"};":",");
                        printf(" // %d frames per %d\n", l, j);
                }
        }
        return 0;
}

$ ./nrcvrs
// 1 receivers - 63 Tx queue events per set of 63, 126, 252, 504  received frames
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26} , // 63 frames per 63
{2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52} , // 63 frames per 126
{4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92, 96, 100, 104} , // 63 frames per 252
{8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208} , // 63 frames per 504
// 2 receivers - 36 Tx queue events per set of 63, 126, 252, 504  received frames
{2, 4, 5, 7, 9, 10, 12, 14, 16, 18, 19, 21, 23, 24, 26, 28, 30, 32, 33, 35, 37, 38, 40, 42, 44, 46} , // 36 frames per 63
{4, 7, 10, 14, 18, 21, 24, 28, 32, 35, 38, 42, 46, 49, 52, 56, 60, 63, 66, 70, 74, 77, 80, 84, 88, 91} , // 36 frames per 126
{7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91, 98, 105, 112, 119, 126, 133, 140, 147, 154, 161, 168, 175, 182} , // 36 frames per 252
{14, 28, 42, 56, 70, 84, 98, 112, 126, 140, 154, 168, 182, 196, 210, 224, 238, 252, 266, 280, 294, 308, 322, 336, 350, 364} , // 36 frames per 504
// 3 receivers - 25 Tx queue events per set of 63, 126, 252, 504  received frames
{3, 5, 8, 10, 13, 15, 18, 20, 23, 25, 28, 30, 33, 35, 38, 40, 43, 45, 48, 50, 53, 55, 58, 60, 63, -1} , // 25 frames per 63
{5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 66, 71, 76, 81, 86, 91, 96, 101, 106, 111, 116, 121, 126, -1} , // 25 frames per 126
{10, 20, 30, 40, 50, 60, 71, 81, 91, 101, 111, 121, 131, 141, 151, 161, 171, 181, 192, 202, 212, 222, 232, 242, 252, -1} , // 25 frames per 252
{20, 40, 60, 81, 101, 121, 141, 161, 181, 202, 222, 242, 262, 282, 302, 323, 343, 363, 383, 403, 423, 444, 464, 484, 504, -1} , // 25 frames per 504
// 4 receivers - 19 Tx queue events per set of 63, 126, 252, 504  received frames
{3, 7, 10, 13, 17, 20, 23, 27, 30, 33, 36, 40, 43, 46, 50, 53, 56, 60, 63, -1, -1, -1, -1, -1, -1, -1} , // 19 frames per 63
{7, 13, 20, 27, 33, 40, 46, 53, 60, 66, 73, 80, 86, 93, 99, 106, 113, 119, 126, -1, -1, -1, -1, -1, -1, -1} , // 19 frames per 126
{13, 27, 40, 53, 66, 80, 93, 106, 119, 133, 146, 159, 172, 186, 199, 212, 225, 239, 252, -1, -1, -1, -1, -1, -1, -1} , // 19 frames per 252
{27, 53, 80, 106, 133, 159, 186, 212, 239, 265, 292, 318, 345, 371, 398, 424, 451, 477, 504, -1, -1, -1, -1, -1, -1, -1} , // 19 frames per 504
// 5 receivers - 15 Tx queue events per set of 63, 126, 252, 504  received frames
{4, 8, 13, 17, 21, 25, 29, 34, 38, 42, 46, 50, 55, 59, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 15 frames per 63
{8, 17, 25, 34, 42, 50, 59, 67, 76, 84, 92, 101, 109, 118, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 15 frames per 126
{17, 34, 50, 67, 84, 101, 118, 134, 151, 168, 185, 202, 218, 235, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 15 frames per 252
{34, 67, 101, 134, 168, 202, 235, 269, 302, 336, 370, 403, 437, 470, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 15 frames per 504
// 6 receivers - 13 Tx queue events per set of 63, 126, 252, 504  received frames
{5, 10, 15, 19, 24, 29, 34, 39, 44, 48, 53, 58, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 13 frames per 63
{10, 19, 29, 39, 48, 58, 68, 78, 87, 97, 107, 116, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 13 frames per 126
{19, 39, 58, 78, 97, 116, 136, 155, 174, 194, 213, 233, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 13 frames per 252
{39, 78, 116, 155, 194, 233, 271, 310, 349, 388, 426, 465, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 13 frames per 504
// 7 receivers - 11 Tx queue events per set of 63, 126, 252, 504  received frames
{6, 11, 17, 23, 29, 34, 40, 46, 52, 57, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 11 frames per 63
{11, 23, 34, 46, 57, 69, 80, 92, 103, 115, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 11 frames per 126
{23, 46, 69, 92, 115, 137, 160, 183, 206, 229, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 11 frames per 252
{46, 92, 137, 183, 229, 275, 321, 367, 412, 458, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 11 frames per 504

// 8 receivers - 10 Tx queue events per set of 63, 126, 252, 504  received frames
{6, 13, 19, 25, 32, 38, 44, 50, 57, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 10 frames per 63
{13, 25, 38, 50, 63, 76, 88, 101, 113, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 10 frames per 126
{25, 50, 76, 101, 126, 151, 176, 202, 227, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 10 frames per 252
{50, 101, 151, 202, 252, 302, 353, 403, 454, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 10 frames per 504
// 9 receivers - 9 Tx queue events per set of 63, 126, 252, 504  received frames
{7, 14, 21, 28, 35, 42, 49, 56, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 9 frames per 63
{14, 28, 42, 56, 70, 84, 98, 112, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 9 frames per 126
{28, 56, 84, 112, 140, 168, 196, 224, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 9 frames per 252
{56, 112, 168, 224, 280, 336, 392, 448, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 9 frames per 504
// 10 receivers - 8 Tx queue events per set of 63, 126, 252, 504  received frames
{8, 16, 24, 32, 39, 47, 55, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 8 frames per 63
{16, 32, 47, 63, 79, 94, 110, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 8 frames per 126
{32, 63, 94, 126, 158, 189, 220, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 8 frames per 252
{63, 126, 189, 252, 315, 378, 441, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 8 frames per 504
// 11 receivers - 7 Tx queue events per set of 63, 126, 252, 504  received frames
{9, 18, 27, 36, 45, 54, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 7 frames per 63
{18, 36, 54, 72, 90, 108, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 7 frames per 126
{36, 72, 108, 144, 180, 216, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 7 frames per 252
{72, 144, 216, 288, 360, 432, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 7 frames per 504
// 12 receivers - 6 Tx queue events per set of 63, 126, 252, 504  received frames
{10, 21, 32, 42, 52, 63, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 6 frames per 63
{21, 42, 63, 84, 105, 126, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 6 frames per 126
{42, 84, 126, 168, 210, 252, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} , // 6 frames per 252
{84, 168, 252, 336, 420, 504, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} }; // 6 frames per 504

Steve Haynal

Feb 13, 2022, 11:26:31 PM
to Hermes-Lite
Hi Rick,

The final FIR filter optimizations I made (doing I and Q in series instead of in parallel) could probably work for radios with 6m, as the FIR filter rate at that point is 8X the final output rate. But the NCOs must run at the maximum sampling rate, which is 122.88 MHz for openhpsdr radios with 6m. Running the NCOs at twice this frequency is pretty much impossible with Cyclone IV technology.

73,

Steve
kf7o

mr.ri...@gmail.com

Feb 19, 2022, 2:02:59 PM
to Hermes-Lite
Hi Steve,

Yes, I was curious whether the NCOs could run at twice 122.88 MHz for openhpsdr radios.
Thanks for the clarification.

-Rick / N1GP