FreeDV on 70cm


Remco Post

Dec 23, 2013, 6:50:44 PM
to digita...@googlegroups.com
Hi all,

a lot happened today. A fellow ham and journalist John Piek, PA0ETE, uploaded a video I made about setting up FreeDV on the Mac with the IC-7100 and mailed that link to the subscribers on his QRM mailing list.

That caused Rob, PA3CNT, who lives just a few kilometers from here to start experimenting with FreeDV and eventually we made a FreeDV QSO on 70cm. I blogged (or should that be bragged? ;-)) about that and that led to another QSO in FreeDV on 70cm, this time with Lucas, PD0LVS. (see http://www.pipsworld.nl/wiki/pages/m4H6p3U5/First_in_FreeDV.html for the blog posting).

Both Rob and Lucas were new to FreeDV, so having both of them experience it in itself is a good thing.

There was a lot of 'R2D2' in the audio both ways, even though the signal strengths were way up there, so I guess there is some more tinkering to be done to get the audio quality as good as it could be, but for now I'm very happy with the results of just one evening of playing with FreeDV!

— 

73 de Remco Post, PE1PIP




Tony Langdon

Dec 23, 2013, 7:43:06 PM
to digita...@googlegroups.com
On 24/12/13 10:50 AM, Remco Post wrote:
Just curious, was this using FM or SSB as the underlying modulation?

-- 
73 de Tony VK3JED/VK3IRL
http://vkradio.com

Andrew O'Brien

Dec 23, 2013, 8:03:46 PM
to digita...@googlegroups.com
and why on 70cm ?  Does it perform better than a FM voice signal ?
Andy K3UK


On Mon, Dec 23, 2013 at 4:43 PM, Tony Langdon <vk3...@gmail.com> wrote:
Just curious, was this using FM or SSB as the underlying modulation?

-- 
73 de Tony VK3JED/VK3IRL
http://vkradio.com

--
You received this message because you are subscribed to the Google Groups "digitalvoice" group.
To unsubscribe from this group and stop receiving emails from it, send an email to digitalvoice...@googlegroups.com.
To post to this group, send email to digita...@googlegroups.com.
Visit this group at http://groups.google.com/group/digitalvoice.
For more options, visit https://groups.google.com/groups/opt_out.

Tony Langdon

Dec 23, 2013, 8:12:40 PM
to digita...@googlegroups.com
On 24/12/13 12:03 PM, Andrew O'Brien wrote:
> and why on 70cm ? Does it perform better than a FM voice signal ?
I'd like to compare the following:

FM voice
SSB voice
FreeDV over FM
FreeDV over SSB
FreeDV-GMSK (when available)

Remco Post

Dec 24, 2013, 12:51:53 AM
to digita...@googlegroups.com
On 24 Dec 2013, at 01:43, Tony Langdon <vk3...@gmail.com> wrote:

Just curious, was this using FM or SSB as the underlying modulation?


Hi Tony,

this was using SSB.



Remco Post

Dec 24, 2013, 1:07:14 AM
to digita...@googlegroups.com
On 24 Dec 2013, at 02:03, Andrew O'Brien <k3uk...@gmail.com> wrote:

and why on 70cm ?  Does it perform better than a FM voice signal ?

Hi Andy,

we wanted to experiment with FreeDV and I only have a Diamond V2000 antenna (6m, 2m, 70cm). Rob has no suitable antenna for 6m, so we decided on 70cm.

For the distance, FM does perform better. Rob is in the grid square next to mine, about 6.5 km away. As for Lucas, he is a bit further away (about 36 km), but both he and I have an antenna at quite a good height, considering we both live in the city (mine is at about 25 m ASL). I think a direct QSO in FM would be a bit noisy, but definitely not impossible (we used the pianos repeater as a back channel).

Remco Post

Dec 24, 2013, 1:17:35 AM
to digita...@googlegroups.com
On 24 Dec 2013, at 02:12, Tony Langdon <vk3...@gmail.com> wrote:

On 24/12/13 12:03 PM, Andrew O'Brien wrote:
and why on 70cm ?  Does it perform better than a FM voice signal ?
I'd like to compare the following:

FM voice
SSB voice
FreeDV over FM
FreeDV over SSB
FreeDV-GMSK (when available)

Hi Tony,

I think for now, the 'R2D2' would make FreeDV sound worse than even SSB voice for the QSOs we made yesterday. I don't know what causes the effect; the samples on David Rowe's blog make me think we could achieve at least similar quality. Of course, given the distance, Rob and I could have opened a window and shouted at each other ;-)



John W.

Dec 25, 2013, 11:11:38 AM
to digita...@googlegroups.com
We sooo badly need an FM version! This would allow more people to experiment locally and be true competition to D-STAR.

Bruce Perens

Dec 25, 2013, 2:50:15 PM
to digita...@googlegroups.com, John W.
The SSB version you presently have works just fine over an FM HT. It's just a demo, though, in that it doesn't really give you any additional capability over FM voice.

With GMSK, codec2 can get 2 kHz bandwidth. Consider that our conventional FM with 5 kHz deviation is something over 10 kHz bandwidth. So, we get at least a 7 dB power efficiency improvement without considering whether we can capture our signal with its error correction better than an FM detector can. And this means that you can use lower power to get the same range as FM, or get greater range with the same power. We also get a 4 or 5 times increase in the number of possible channels. But this is going to require some tricks, like locking frequency to fixed stations (repeaters or relays) because mobile crystals aren't that stable.
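Bruce's 7 dB figure is just the bandwidth ratio expressed in decibels; a quick sketch, using only the 10 kHz and 2 kHz numbers from the paragraph above:

```python
import math

fm_bw = 10_000   # conventional FM occupied bandwidth, Hz (per the post above)
gmsk_bw = 2_000  # Codec2-over-GMSK occupied bandwidth, Hz (per the post above)

# Narrowing the bandwidth shrinks the noise power the receiver admits
# proportionally, so the efficiency gain in dB is 10*log10(bandwidth ratio).
gain_db = 10 * math.log10(fm_bw / gmsk_bw)
print(f"{gain_db:.1f} dB")    # → 7.0 dB

# The same ratio is the increase in the number of channels per unit spectrum.
print(fm_bw // gmsk_bw)       # → 5
```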

Thanks

Bruce


"John W." <kd8...@gmail.com> wrote:
We sooo badly need an FM version!  This would allow more people to 
experiment locally and be a true competition to D-STAR

--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Reuven Z Gevaryahu

Dec 26, 2013, 11:01:27 AM
to digita...@googlegroups.com
In my area (FM29/FN20), we used FreeDV on our local digital modes net on an FM repeater. It worked reasonably well, but some people were more readable than others. Sometimes it was a matter of pre-emphasis/de-emphasis differences causing too high an error rate, and sometimes it was a matter of low mic gain (especially built-in laptop mics) causing the codec to treat some words as background noise. Some of the participants then followed up on VHF SSB simplex, which apparently worked better. (I was too far from them to participate with my modest antenna.) But in short, it works pretty well on VHF SSB, FM and even FM repeaters.

There were a few complaints about UI oddities, but the most frequently encountered issues among the net participants were around the sound configuration. Everybody had different sound devices and PTT methods, and we needed to walk folks through how to set things up for SignaLinks, how to set people up for RigBlasters, etc. But some of the folks can now be spotted on 14.236...

--Reuven (KB3EHW)

Steve

Dec 26, 2013, 12:48:25 PM
to digita...@googlegroups.com, John W.
On Wednesday, December 25, 2013 1:50:15 PM UTC-6, Bruce Perens wrote:
...But this is going to require some tricks, like locking frequency to fixed stations (repeaters or relays) because mobile crystals aren't that stable.


A possible method is to transmit a BPSK preamble (pilot), either continuously or at the start of the transmission. This provides a mixing frequency to offset the error of each radio in the net. With a non-FSK modem you can update the error estimate every symbol; with an FSK modem, a continuous pilot would probably be better.

You could probably do that and remain within 2 kHz??
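As a sketch of the pilot idea (all numbers hypothetical: an 8 kHz sound-card rate, a 1 kHz pilot, a simulated 37.5 Hz radio error), the receiver can recover the frequency offset from the pilot's FFT peak to within one bin:

```python
import numpy as np

fs = 8000        # sample rate, Hz (hypothetical sound-card modem)
pilot = 1000     # nominal pilot frequency, Hz (hypothetical)
offset = 37.5    # simulated frequency error of the far radio, Hz
n = 4096         # analysis window; bin spacing is fs/n ≈ 1.95 Hz

t = np.arange(n) / fs
rx = np.cos(2 * np.pi * (pilot + offset) * t)   # received off-frequency pilot

# Locate the pilot in the spectrum; its distance from the nominal position
# is the mixing correction to apply to the rest of the signal.
spectrum = np.abs(np.fft.rfft(rx * np.hanning(n)))
est = np.argmax(spectrum) * fs / n - pilot
print(f"estimated offset ≈ {est:.1f} Hz")       # within one ~2 Hz bin of 37.5
```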

Steve

Dec 26, 2013, 12:53:50 PM
to digita...@googlegroups.com
I forgot to say you're going to need a built-in Doppler solution, as you might be in aircraft going 600 knots in opposite directions, or working through a LEO satellite going Mach 10 :-)

Matthew Pitts

Dec 26, 2013, 1:47:48 PM
to digita...@googlegroups.com
Bruce,

To be honest, most hams that use DV on VHF and higher aren't looking to improve the number of transmissions that can share a given frequency; most of the complaints about D-STAR (other than the proprietary nature of the vocoder) seem to be based on ignorance of how the protocol itself was designed. By this I mean that most of the complaints focus on how narrow the Low Speed Data subchannel is compared to what users feel would be useful; this is evident from folks moving to DMR/NXDN and possibly the new Yaesu system, since those allow for voice and data simultaneously at equal rates. I do understand that you and others have a different view, and I respect that; I just think you will have a hard time convincing users to give up features like simultaneous voice and data in order to get multiple parallel transmissions through a common repeater.

Matthew Pitts
N8OHU




Bruce Perens

Dec 26, 2013, 2:32:01 PM
to digita...@googlegroups.com, Matthew Pitts
Matthew,

What applications are they running on the data channel?

The radio we are building is an SDR based on the Whitebox design. It has a FLASH-based gate array rather than SRAM, so its power drain is low enough to operate as an HT. It's not a DDR, so it can't operate on the entire band at once; it uses an IQ modulator chip. But it can probably do all of DMR, D-STAR, and something based on Codec2, especially now that we have more information on the AMBE codec. It's an open platform, so we don't have to develop every mode ourselves.

So, you can have a wide enough data channel when you need it. But unlike those other modes, you can turn it off when you don't need it.

I don't really care for hard-coding modes into radios, and I think nobody else will once they have this.

Thanks

Bruce

Matthew Pitts

Dec 26, 2013, 3:01:02 PM
to digita...@googlegroups.com
Bruce,

I'm not sure what is out there for use on DMR, but for over the air D-STAR use there is D-RATS and a few other applications that are designed for the low speed data channel; they also use it for D-PRS (a variant of APRS).

Matthew Pitts
N8OHU



Steve

Dec 27, 2013, 7:08:09 AM
to digita...@googlegroups.com
I was thinking: if you used the 64-bit FEC codec2 modes at 9600 bps, the voice would only use about 17% of the bandwidth (1600/9600). That seems like a lot of data to play with/turn into a protocol. 33% at 4800. D-STAR's 75% voice does seem a bit high (I haven't looked at the specs, but I seem to recall 3600/4800). Then again, maybe all you really need are the 150 bytes per second of the 25% left over.
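The fractions above check out; a trivial sketch (the 3600/4800 D-STAR split is as recalled in the post, not verified here):

```python
# Voice share of the channel for a 1600 bps Codec2+FEC payload.
voice_bps = 1600
for modem_bps in (9600, 4800):
    print(f"{modem_bps} bps modem: {voice_bps / modem_bps:.0%} voice")
# → 9600 bps modem: 17% voice
# → 4800 bps modem: 33% voice

# D-STAR (as recalled above): 3600 of 4800 bps carries voice+FEC.
print(f"D-STAR: {3600 / 4800:.0%} voice")   # → 75%

# The remaining 25% of 4800 bps is 1200 bps, i.e. 150 bytes per second.
print((4800 - 3600) // 8)                   # → 150
```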

James Hall

Dec 27, 2013, 9:01:13 AM
to digita...@googlegroups.com
The protocol should be smart enough to realize that voice IS data.



Remco Post

Dec 27, 2013, 9:19:18 AM
to digita...@googlegroups.com
On 27 Dec 2013, at 13:08, Steve <coupay...@gmail.com> wrote:

I was thinking, if you used the 64 bit FEC codec2 modes at 9600 bps, the voice would only use about 17% of the bandwidth (1600/9600), That seems like a lot of data to play with/turn into a protocol. 33% at 4800. Dstars 75% voice does seem a bit high (I haven't looked at the specs, but I seem to recall 3600/4800). Then again, maybe all you really need are the 150 bytes of the 25% left over.


having a GMSK FreeDV would provide two benefits:

- usable with FM-only transceivers
- improved range over FM
(- more channels than FM in the same total bandwidth)

With 9k6 you'd lose the second. And really, we've had 9k6 packet radio for about 20 years now and hardly anybody uses it (any more), so why bother with high data rates? Having a bit more than the current 25 bps (in 1600 bps FreeDV) could be nice, but I don't see the use for 8 kbps of data bandwidth.



Bruce Perens

Dec 27, 2013, 1:09:14 PM
to digita...@googlegroups.com, Remco Post
Given an open SDR platform, you have the potential to add carriers when you need them and turn them off otherwise. Our current FDMDV-derived soft-modem is parameterized to do that. So, you can add one or more data carriers as needed.

One thing I would like to do with the current HF slow data implementation is add a bit more protocol around the callsign, so that it's error corrected and so that a program can recognize it. But if we have something to do with more data, adding carriers works.

Bill Vodall

Dec 27, 2013, 1:14:00 PM
to digita...@googlegroups.com
On Fri, Dec 27, 2013 at 6:19 AM, Remco Post <remco...@gmail.com> wrote:
> we’ve had 9k6 packet radio for
> about 20 years now and everybody is not using that (any more), so why bother
> with high data rates? Having a bit more than the current 25 bps (in 1600 bps
> FreeDV) could be nice, but I don’t see the use for 8kbps data bandwidth

9k6 is just a good start. There's finally a 9k6 data radio readily
available (Kenwood D710) and two more should show up in 2014. The
Argent Data T3-9670 hopefully in January and the UDRX-440 a little
later. Add to that the advances in faster soundcard packet by
DireWolf and UZ7HO... This is going to open up ham applications
like never before. Hopefully ICOM and others flip their data bit to
enable 4800 and join the party... The vendor that integrates data
and open source (android, linux, etc) in a single box is going to rock
the ham world.

Bill, WA7NWP

Mooneer Salem

Dec 27, 2013, 1:41:12 PM
to digita...@googlegroups.com
The ship has sailed for traditional packet, unfortunately. The people who want faster data all use HSMM at 2.4GHz and above these days. I wouldn't be opposed to having extra data available to improve the quality of digital voice though.

-Mooneer K6AQ


Kristoff Bonne

Dec 28, 2013, 3:00:39 PM
to digita...@googlegroups.com
James,


The problem with this approach is that it requires your transport layer (OSI layer 2) to have knowledge of the upper layers, and this is not considered to be the best idea.

How is the protocol going to react to new versions of a voice format it does not know about (David has clearly stated that codec2 is still to be considered "in development" and that the format can still change in the future), or to encrypted voice, or some unknown "private" type of data?

Also, this is very error-prone. The "type of data" information is very important because it determines the format used to encode the data. If you get it wrong, the data extracted from the received stream is going to be completely wrong!

In any case, this is going to be hell to code. I leave it to you to program this. :-)



In c2gmsk, I tried to implement a small trick: the synchronisation information (which is present in the stream anyway) doubles as "type-of-data" information.

In contrast to D-STAR, the synchronisation pattern is not a fixed value but a table of 16 possible patterns, and the choice of synchronisation pattern in the stream signals the type-of-data for the next frame or next number of frames.

The patterns are 24 bits long with a minimum distance of 8 (i.e. each differs in at least 8 bits from every other pattern), so up to 3 of the 24 received bits can be wrong and the "type-of-data" information will still decode correctly.

I chose 16 possible patterns so we can encode up to 16 types of data.

Possibilities we thought about:
c2 voice @ 1200 bps
c2 voice @ 1400 bps
c2 voice @ 2400 bps
encrypted voice
"full data" @ 2400 bps, current voice frame is same as previous voice frame
"full data" @ 2400 bps, current voice frame is silence
end-of-stream marker
"private"
"experimental"

So 16 possible types-of-data should be enough for the first couple of years.


The FEC system implied by the minimum distance of 8 is also interesting, as it provides information about the BER (bit error rate) of the received stream.




BTW, every pattern also has a minimum distance to itself shifted one or two bits to the left or right, but that is what you would expect from a synchronisation pattern.



Creating a format to carry digital voice over a perfect radio path is not that difficult. The difficult part is finding a system that is robust enough to deal with transmission errors. :-)
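A sketch of the nearest-pattern idea (the 16 sync words below are randomly generated stand-ins with the stated pairwise distance, not the real c2gmsk table):

```python
import random
from itertools import combinations

random.seed(42)

def hamming(a, b):
    """Number of differing bits between two 24-bit words."""
    return bin(a ^ b).count("1")

# Greedily pick 16 24-bit sync words with pairwise Hamming distance >= 8
# (a stand-in for the real c2gmsk table, which isn't reproduced here).
words = []
while len(words) < 16:
    cand = random.getrandbits(24)
    if all(hamming(cand, w) >= 8 for w in words):
        words.append(cand)
assert all(hamming(a, b) >= 8 for a, b in combinations(words, 2))

# Nearest-pattern decoding: with minimum distance 8, up to 3 bit errors in
# the received 24-bit sync word still decode to the right table entry.
def decode(rx):
    return min(range(16), key=lambda i: hamming(rx, words[i]))

sent = 5                        # frame-type index 5
rx = words[sent] ^ 0b101001     # flip 3 of the 24 bits in transit
print(decode(rx))               # → 5
```

The Hamming distance between the received word and the winning pattern also doubles as a per-frame BER estimate, as noted above.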


73
kristoff - ON1ARF

Steve

Dec 28, 2013, 11:48:38 PM
to digita...@googlegroups.com
I was playing on my napkin at the Satellite Lounge this evening, and after
two glasses of the house wine and a big heap of boeuf aux pommes de terre...

Assume a 2400 baud system, and a 25 baud digital voice.
The 2400 baud is 4-bits per symbol (9600 bps), and the 25 baud is 64 bits per symbol (1600 bps).
2400/25 = 96 baud frames (96 * 4 bits, 384 bits) 48 bytes per frame
frame = 64 bits voice (8 bytes) + 320 bits (40 bytes) data 25 times a second.

voice = 25 * 8 bytes voice frames a second (1600 bps)
data = 25 * 20 bytes data frames a second (4000 bps)
overhead = 25 * 20 byte overhead frames a second (4000 bps)

|8 bytes voice|40 byte data|

Move the bytes around...

|16 bytes|8 bytes|20 bytes|4 bytes|

16 byte Header

|3 byte sync|1 byte version|6 byte to callsign|6 byte from callsign|

32 byte Tail

|8 byte voice packet|20 byte data packet|4 byte frame CRC|

Callsign = 6-bit ASCII characters and numbers, AA9AAAA, plus a 6-bit SSID (00-63): 6 bytes
version number: 4 bits frame protocol and 4 bits codec protocol

If the voice frame is encrypted, the data frame will have more information
If a repeater is desired, the data frame will have repeater information

Film at 11...
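The napkin layout packs into exactly one 48-byte frame; a sketch in Python (simplified to plain ASCII callsigns rather than the 6-bit packing described, and with an arbitrary sync word and CRC-32, both hypothetical choices):

```python
import struct
import zlib

SYNC = b"\x7e\x81\xb4"   # hypothetical 3-byte sync word

def pack_frame(version, to_call, from_call, voice, data):
    """3B sync | 1B version | 6B to | 6B from | 8B voice | 20B data | 4B CRC."""
    assert len(voice) == 8 and len(data) == 20
    body = struct.pack("3sB6s6s8s20s", SYNC, version,
                       to_call.encode().ljust(6)[:6],
                       from_call.encode().ljust(6)[:6],
                       voice, data)
    return body + struct.pack(">I", zlib.crc32(body))   # 4-byte frame CRC

# version byte: high nibble = frame protocol, low nibble = codec protocol
frame = pack_frame(0x11, "K3UK", "PE1PIP", b"\x00" * 8, b"\x00" * 20)
print(len(frame))    # → 48, and 48 bytes * 8 bits * 25 frames/s = 9600 bps
```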

Bruce Perens

Dec 29, 2013, 3:55:30 PM
to digita...@googlegroups.com, Steve
1600 BPS voice and 4000 BPS overhead. What's wrong with this picture? :-)
The alternative I am working on is to keep it low-rate and go connection-based so that most packets don't have any overhead but an 8-bit connection ID.

Kristoff Bonne

Dec 29, 2013, 4:11:22 PM
to digita...@googlegroups.com
Bruce,


Don't forget synchronisation and FEC!

Also, when we discussed 1800 GMSK before, the question popped up about the frequency response of the FM radio path between 0 and 900 Hz when using the 9k6 data ports of a radio.

Have you tested this already?


73
kristoff - ON1ARF

Bruce Perens

Dec 29, 2013, 4:32:56 PM
to digita...@googlegroups.com
On 12/29/2013 01:11 PM, Kristoff Bonne wrote:
Bruce,


Don't forget syncronisation and FEC!
And error correction. I want to be able to synchronize when the receiver doesn't hear the entire transmission, as when a car drives through a hole in the RF coverage of its repeater or relay, so the intent is to intersperse a sentinel symbol and the connection ID throughout the transmission, not big, long synchronization sequences at the start of the packet.

Also, when we already discussed 1800 gmsk before, the question popped up about the frequency-responds of FM radio-path between 0 and 900 Hz when using the 9k6 dataports of a radio.
I actually don't plan to fit this to an existing FM radio, although I imagine that existing radios could be modified to provide the required low-frequency response if they don't already have it. My initial target is Whitebox.

    Thanks

    Bruce

Steve

Dec 29, 2013, 4:54:36 PM
to digita...@googlegroups.com
After a nap, I think the header every frame is a bit of a waste. I think I was imagining a repeater that could send frames in any order from any source, but no such repeater would ever exist :-) The hazards of a one-hour doodle.

I was also thinking of something with low-speed voice but with a lot of data. I suspect voice and data might just be better kept separate, except maybe for small data such as the callsign, etc.

The thing about a low bit-rate codec is that sending it faster doesn't gain you anything, is what I was thinking.

With a 4800 FSK modem, you might as well send 3600-bit FEC/codec frames. If you do send 1600 bps on a 4800 modem, you might as well fill it out with data, if you can find that much to send.

Kristoff Bonne

Dec 29, 2013, 5:55:41 PM
to digita...@googlegroups.com
Bruce,




On 29-12-13 22:32, Bruce Perens wrote:
On 12/29/2013 01:11 PM, Kristoff Bonne wrote:
Bruce,


Don't forget syncronisation and FEC!
And error-correction. I want to be able to synchronize when the receiver doesn't hear the entire transmission, as when a car drives through a hole in the RF coverage of its repeater or relay, so the intent is to intersperse a sentinel symbol and the connection ID throughout the transmission. Not big long syncrhonization sequences at the start of the packet.
Hmm, aren't we mixing up two things here?

The reason you need synchronisation patterns is, as the name implies, to resynchronise your stream when it drops out for a long time.

I don't know what you mean by "connection ID", but if it is a pointer to a header, it doesn't make much sense to spread it out all over the stream. If the receiver (or repeater) has not received a header, it has no idea how to process the stream anyway, with or without a pointer to it (unless the connection ID contains all the information that was present in the header).



All layer-3 information of a stream is put in front of the stream for the simple reason that the repeater needs this information first, so it can determine how to process the stream.

So, if you want a receiver to be able to pick up a QSO in the middle of a stream, the only option is to repeat the header inside the stream. That is what Icom has done in their extension to the D-STAR protocol.
If a receiver/repeater does not have this information, it has no idea whether this is a simplex QSO, a "repeater" QSO or a "callsign-routed" QSO, whether it is a control command, voice or data, encrypted or clear, ...

What exactly is in the connection ID?




BTW, concerning synchronisation patterns: if you say you want to intersperse your synchronisation pattern, I hope you do not mean you want to reduce the actual number of bits used for synchronisation.

The more you spread out your sync pattern in time, the longer it takes for the receiver to receive a complete and positively identified sync pattern and to be sure it is not a false positive (i.e. to detect two consecutive positive matches).

Remember that when your signal drops out, you can assume that the BER of the received stream is really going to suck, so do not expect your synchronisation pattern to be anywhere near correct! The smaller you make the sync pattern, the less it will be able to deal with transmission errors. This makes spreading it out in time even worse, as you then need multiple positive matches of consecutive sync patterns!



In c2gmsk, I just made a few assumptions about how to deal with bit errors in the synchronisation pattern, but only for the scenario where the received stream is shifted one or two bits to the left or right.

I have not even dared to imagine how I would handle picking up a stream in the middle of a QSO, unless it comes in 100% correct. Just listen to a D-STAR QSO at the edge of a repeater's coverage area: you can hear that the repeaters have quite a bit of trouble resynchronising, and that is with a 24-bit synchronisation pattern and a protocol that has been in use and tested for years by a company that makes digital radios for a living and has expertise in this.

The synchronisation channel should be considered a "single point of failure". The last thing you want to do is reduce its robustness! :-)

Just look at the specs of D-STAR, DMR, dPMR, P25 and NXDN to see how they do synchronisation!





Also, when we already discussed 1800 gmsk before, the question popped up about the frequency-responds of FM radio-path between 0 and 900 Hz when using the 9k6 dataports of a radio.
I actually don't plan to fit this to an existing FM radio,
I kind of feared that. :-(


although I imagine that existing radios could be modified to provide the required low-frequency response if they don't already have it. ...
Hmm. You *imagine*...
:-(




My initial target is Whitebox.
So you are designing a protocol that can only be used if the user buys a new radio? If you allow me to play devil's advocate here: what makes this different from another D-STAR, DMR, dPMR, or whatever?

In any case, I can now buy a complete Chinese dPMR radio for less than 100 dollars, including charging station and everything else. Will you be able to match that with the Whitebox?



    Thanks
    Bruce
73
kristoff - ON1ARF

Mel Whitten

Dec 29, 2013, 6:05:06 PM
to digita...@googlegroups.com
----- Original Message -----
From: Steve
Sent: Sunday, December 29, 2013 3:54 PM
Subject: Re: [digitalvoice] FreeDV on 70cm

After a nap, I think the header every frame is a bit of a waste.  I think I was thinking of a repeater where it could send frames in any order from any source, but no such repeater would ever exist :-)  The hazards of a one hour doodle.

I was also thinking of something with low speed voice but with a lot of data. I suspect voice and data might just be better kept separate, except maybe small data such as callsign, etc.
       Experience has shown that callsign data is just as important as, if not more important than, the voice itself in weak-signal conditions, so yes, small data for the callsign is essential.
Mel
 

The thing about a low bit-rate codec, is it doesn't do anything to send it faster is what I was thinking.

With a 4800 FSK modem, you might as well send 3600 bit fec/codec frames.  If you do send 1600 bps on a 4800 modem, you might as well fill it out with data, if you can find that much to send.


On Sunday, December 29, 2013 2:55:30 PM UTC-6, Bruce Perens wrote:
1600 BPS voice and 4000 BPS overhead. What's wrong with this picture? :-)
The alternative I am working on is to keep it low-rate and go connection-based so that most packets don't have any overhead but an 8-bit connection ID.

Matthew Pitts

Dec 29, 2013, 6:48:33 PM
to digita...@googlegroups.com
Kristoff,

I suspect the JARL designed the D-STAR protocol the way they did for precisely the reasons you outline; I don't really know if Icom made any major alterations to it, beyond not properly supporting some of the data flags in the protocol description, but that isn't really relevant to this discussion. A lot of us don't view it this way, but D-STAR has a relatively simple Common Air Interface compared to DMR and NXDN, and that interface is also pretty much all there is to the network interface, whereas the other systems mentioned have extremely complex network interfaces; in the case of DMR, a lot of the added complexity of the CAI is due to the need to synchronise the subscriber radio with the repeater time slots. This is one reason why, despite its problems, I feel the D-STAR protocol design is the best choice for what we want to accomplish. And besides, we already have plenty of working code to base any networking we might eventually want to do on.

Matthew Pitts
N8OHU



From: Kristoff Bonne <kris...@skypro.be>
To: digita...@googlegroups.com
Sent: Sunday, December 29, 2013 5:55 PM

Subject: Re: [digitalvoice] FreeDV on 70cm

Kristoff Bonne

unread,
Dec 29, 2013, 6:54:44 PM12/29/13
to digita...@googlegroups.com
Steve,



Voice is fundamentally different from data:

A "media" stream (voice, video, ...) is by default real-time (meaning the stream may never stop), but it can tolerate bit errors (which may or may not be corrected by the FEC layer). Voice is fundamentally one-way.

A data stream has no idea of the semantics of what it is carrying, so it has to be 100% correct. But it does not have to be real-time.
This means that errors are "corrected" by retransmission, which requires two-way communication.
The only exception is information that is allowed to contain errors.


In most cases, data communication can be divided into two groups:
- bulk transfer of large amounts of data;
- slow-speed transfer of data that can be added as "auxiliary data" to a voice channel.

For me, it is pretty simple:
- For bulk transfer, just design a protocol that only does data at a maximum bitrate. Use quick RX/TX turnaround times and small packet sizes. If you use FEC, use something like RS.
- For slow-speed data, try to mix the data into the voice stream but limit it to low bitrates: either very little information that is accepted to possibly be wrong (like a callsign, if not used for routing or OSI layer 3), or squeeze it into the voice stream where voice information can be omitted (i.e. replaced by silence, or by repeating the previous voice frame).



73
kristoff - ON1ARF

Bruce Perens

unread,
Dec 29, 2013, 7:40:41 PM12/29/13
to digita...@googlegroups.com, Kristoff Bonne
Kristoff,

I have an archived email message explaining this in more detail, I'll dig around for it.

In the connection based scheme, a mobile can connect to a relay. A relay is a simplex station that connects you to something else, a repeater is a special case of relay that does simultaneous transmission and reception and implements a multicast.

Mobiles connect to a relay before voice or data is transmitted. At this time, they exchange all callsign and routing information and get back an 8-bit connection ID.

All subsequent communication between the mobile and the relay is identified by the connection ID.
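The scheme above could be sketched as two frame types: a one-time setup frame carrying the callsigns and routing, and a steady-state voice frame carrying only the 8-bit connection ID. Everything here (frame type bytes, field widths, function names) is a hypothetical illustration, not Bruce's actual wire format.

```python
import struct

# Hypothetical framing for the connection-ID idea: callsigns are sent
# once at connection setup; afterwards every frame carries only one byte
# of identification. Type bytes 0x01/0x02 are invented for illustration.

def make_setup_frame(conn_id: int, src_call: str, dst_call: str) -> bytes:
    # setup frame: type byte, 8-bit connection ID, two 8-char padded callsigns
    return struct.pack(">BB8s8s", 0x01, conn_id,
                       src_call.ljust(8).encode("ascii"),
                       dst_call.ljust(8).encode("ascii"))

def make_voice_frame(conn_id: int, codec_bits: bytes) -> bytes:
    # steady-state frame: type byte, connection ID, then only codec payload
    return struct.pack(">BB", 0x02, conn_id) + codec_bits

setup = make_setup_frame(0x2A, "PE1PIP", "PA3CNT")   # 18 bytes, sent once
voice = make_voice_frame(0x2A, b"\x00" * 7)          # 9 bytes per voice frame
```

The point of the sketch is the ratio: after setup, per-frame addressing overhead drops from two full callsigns to a single byte.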

Obviously there would be a different protocol for simplex.

It's easy for us to become too datagram oriented, coming from the Internet.

Thanks

Bruce

Remco Post

unread,
Dec 29, 2013, 8:09:53 PM12/29/13
to digita...@googlegroups.com
I’d like to offer a different perspective.

D-Star offers all kinds of services, like call-sign routing and reflectors. This is great; I don't use it, but I think I grasp the extent of the services provided and their benefits.

I like FreeDV for its simplicity. No headers, no services, just voice transmission. I can see that a GMSK FreeDV (with some added sync patterns) would be more suitable for VHF/UHF usage than the 16*QPSK modem used for HF.

I don’t see the use in replicating everything that D-Star does well. If I wanted to use those services, I’d do so. My IC-7100 is perfectly capable of doing so.

What I’d like in a GMSK FreeDV is the improved performance over D-Star in terms of range and audio quality, not providing the same services with just a different codec….

--
You received this message because you are subscribed to the Google Groups "digitalvoice" group.
To unsubscribe from this group and stop receiving emails from it, send an email to digitalvoice...@googlegroups.com.
To post to this group, send email to digita...@googlegroups.com.
Visit this group at http://groups.google.com/group/digitalvoice.
For more options, visit https://groups.google.com/groups/opt_out.

Bruce Perens

unread,
Dec 29, 2013, 8:21:08 PM12/29/13
to digita...@googlegroups.com
On 12/29/2013 05:09 PM, Remco Post wrote:
What I'd like in a GMSK FreeDV is the improved performance over D-Star in terms of range and audio quality, not providing the same services with just a different codec….
And more channel capacity out of the band.

I agree that the potential to have a significantly narrower operation than today is going to end up being a feature. I think you can have lots of features without paying for them in overhead. We just have to be smarter than the current datagram and side-channel schemes. Having HTs that are SDRs and can support apps opens a lot of this up for experimentation.

    Thanks

    Bruce

John D. Hays

unread,
Dec 29, 2013, 8:33:39 PM12/29/13
to digita...@googlegroups.com
Over 95% of our repeater channel capacity sits silent -- the narrow bandwidth argument is not the winner.  (In the US we haven't even moved to 12.5 kHz channels for narrow FM like the Europeans have.)

Codec-2 has its benefits and on HF, the narrow bandwidth has strong value on a crowded band, not so much on repeaters.



John D. Hays
K7VE
PO Box 1223, Edmonds, WA 98020-1223 
  


On Sun, Dec 29, 2013 at 5:21 PM, Bruce Perens <br...@perens.com> wrote:
On 12/29/2013 05:09 PM, Remco Post wrote:
What I’d like in a GMSK FreeDV is the improved performance over D-Star in terms of range and audio quality, not providing the same services with just a different codec….
And more channel capacity out of the band.

I agree that the potential to have a significantly narrower operation than today is going to end up being a feature. I think you can have lots of features without paying for them in overhead. We just have to be smarter than the current datagram and side-channel schemes. Having HTs that are SDRs and can support apps opens a lot of this up for experimentation.

    Thanks

    Bruce

Steve

unread,
Dec 29, 2013, 11:16:27 PM12/29/13
to digita...@googlegroups.com
The term "auxiliary channel," as applied to data sent along with voice, may be backwards these days. I was reading an article about how teens are embracing texting over other forms of social networking. Second comes instant broadcasting of snapshots. Teens use the voice part of their phones only as a last resort :-)

Steve

unread,
Dec 29, 2013, 11:32:45 PM12/29/13
to digita...@googlegroups.com
Especially if it sounds like a robot.

On Sunday, December 29, 2013 7:33:39 PM UTC-6, K7VE wrote:
...the narrow bandwidth argument is not the winner.

Bruce Perens

unread,
Dec 30, 2013, 2:04:12 AM12/30/13
to digita...@googlegroups.com, John D. Hays
Try coordinating a new digital repeater in a metropolitan area, and you'll change your mind. The last time I tried that was before we lost 440 to PAVE PAWS. And I suspect the municipal and land-mobile folks would be happy to have more channels, too.

John D. Hays

unread,
Dec 30, 2013, 2:30:09 AM12/30/13
to digita...@googlegroups.com
Bruce,

Coordination is another issue -- a plan to refarm the 2 meter and 70cm band plans here in the Seattle area gained no traction; we finally put some slots in 146.4-146.5 paired with 147.4-147.5 at 12.5 kHz (and two 6.25 kHz splinters), and segments of 440 can now go 12.5 kHz.  That's politics, not technology, but the fact remains that of the used pairs, most are quiet (some just talking clocks).  You can go as narrow as you want, but under the current deferral to local coordinating bodies the band plans are not going to change, so we are stuck with 15 and 20 kHz channels.  PAVE PAWS doesn't eliminate all of 440; you just need to move from high-altitude to low-altitude repeaters.




John D. Hays
K7VE
PO Box 1223, Edmonds, WA 98020-1223 
  


Remco Post

unread,
Dec 30, 2013, 5:29:10 AM12/30/13
to digita...@googlegroups.com
Op 30 dec. 2013, om 02:33 heeft John D. Hays <jo...@hays.org> het volgende geschreven:

Over 95% of our repeater channel capacity sits silent -- the narrow bandwidth argument is not the winner.  (In the US we haven't even moved to 12.5 kHz channels for narrow FM like the Europeans have.)

in PA country it would be impossible to get a new (FM) repeater on 2m without another repeater being taken down. The same goes for 70cm. This is partly due to the way the 'Agentschap Telecom' (the Dutch FCC) grants permits for repeaters: they require at least 20 km between repeaters. There is no room for 'digipeaters' in the 2m band. In the 70cm band they created room for digipeaters (D-Star and DMR) by implementing those in the 'German' 7.6 MHz shift system, while we have the FM repeaters in a less common 1.6 MHz shift system using different input and output frequencies.

I guess that making more efficient use of the bandwidth only helps if Agentschap Telecom also changes its policy regarding distances between repeaters. As for the half-duplex channels, those are mostly quiet, so having more of them is useless...

Kristoff Bonne

unread,
Dec 30, 2013, 6:54:06 AM12/30/13
to digita...@googlegroups.com
Bruce,

I'm a bit confused.


This means that if a receiver needs to resync to a stream, it now needs to resync to both the sync pattern and the connection ID.
Why would that be an advantage? It only makes things more complex.


The only advantage of the connection-ID system I see is that it reduces the need for a header for every stream, but that's just a solution for a problem you created yourself. And it comes at the cost of creating a control protocol that is otherwise not needed.

Or am I missing something here?


73
kristoff - ON1ARF

Kristoff Bonne

unread,
Dec 30, 2013, 6:59:36 AM12/30/13
to digita...@googlegroups.com
Steve,
You are correct. It would be good to use the same terminology as much as
possible to keep the discussion as simple as possible.


I used "auxiliary data" because it is the term used in some other
technologies (like DAB, Digital Audio Broadcasting) for transmitting a
data stream encapsulated inside an audio stream. It is used for data
that has a "link" to the audio channel being broadcast (say, the name of
the song, an image of the album art, ...).



In any case, if you want to implement texting on ham radio, use the APRS
message service. There is no need to reinvent the wheel every time when
good solutions already exist.


73
kristoff - ON1ARF

Kristoff Bonne

unread,
Dec 30, 2013, 7:18:18 AM12/30/13
to digita...@googlegroups.com
Remco,


I see your point, but if you just want "dumb" simplex digital voice, just get yourself a cheap dPMR radio for less than 100 dollars. There is no need to create a new protocol for that.


The advantage of FreeDV is that it is HF, so it works over long distances. VHF/UHF are much more limited in range, so you need "relaying" infrastructure much sooner. For that, you need OSI layer-3 information, i.e. a header.

The proposal for a header for c2gmsk -at some point- was to create different types of header: a "minimal" header for simplex (local simplex, 10 meter / VHF DX, NVIS), a "normal" header for repeater-operations and an "extended" header if you need to add information for regulatory reasons.

In essence, a header does not really "cost" that much. It just adds a delay of a couple of hundred ms at the beginning of a stream. Not the end of the world, if you ask me. :-)



BTW, for me the advantage of designing our own DV system is simply to have a system that allows us to experiment. Say you are interested in 4 meter DXing from PA to G; you might want to modify the protocol to suit those radio conditions better. The same goes if you are interested in 80/60/40 meter NVIS operations, or satellite, or whatever.


There are now more than sufficient "ready-to-use" VHF/UHF digital voice protocols out there. That is not the issue, and there is indeed no reason whatsoever to just replicate those services for the sake of using a new codec.

The "problem" with the current systems is that none of them were created with "experimentation" in mind. And that is, in my mind, what ham radio is about.
That is the hole codec2-based GMSK can fill.



73
kristoff - ON1ARF






On 30-12-13 02:09, Remco Post wrote:
I'd like to offer a different perspective.

D-Star offers all kind of services like call-sign routing and reflectors. This is great, I don't use it, but I think I grasp the extent of the services provided and their benefits.

I like FreeDV for its simplicity. No headers, no services, just voice transmission. I can see that a GMSK FreeDV (with some added sync patterns) would be more suitable for VHF/UHF usage than the 16*QPSK modem used for HF.

I don't see the use in replicating everything that D-Star does well. If I wanted to use those services, I'd do so. My IC-7100 is perfectly capable of doing so.

What I'd like in a GMSK FreeDV is the improved performance over D-Star in terms of range and audio quality, not providing the same services with just a different codec….

Kristoff Bonne <kris...@skypro.be> wrote:
    Thanks
    Bruce
73
kristoff - ON1ARF


--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


--

73 de Remco Post, PE1PIP

Kristoff Bonne

unread,
Dec 30, 2013, 8:38:56 AM12/30/13
to digita...@googlegroups.com
Matthew,


Concerning the Icom extension:

What they do is, if no other data is being sent in a D-STAR stream, they fill up that bandwidth by continuously sending a copy of the header. As that space would otherwise be unused, that is -I guess- a sensible thing to do.

This was not foreseen in the original D-STAR specifications, and I think they even invented their own "type-of-data" identifier for it. (All data sent in the slow-data channel of D-STAR is encapsulated in frames, and every frame begins with a "type-of-data" identifier.)




I agree. D-STAR was created to do what it was supposed to do and -let's be honest- it does it pretty well.




Starting to write code for c2gmsk, and asking the many "what if" questions that come with it, has been a very good learning experience. It really helps you appreciate why digital voice protocols are the way they are, and why the design choices made by the people who created D-STAR are as they are.
And I must say that most of them make a lot of sense.

Only, for me, I wanted something that allows people not only to *use* digital voice, but also to learn how it works and be able to adapt it; hence c2gmsk.



It is very easy to say "that protocol sucks because it lacks <this> or <that>, and <this-option> should have been added too".
But when you get into the nitty-gritty details of actually writing code, it becomes clear that some "minor" feature can have a much bigger impact than expected.

I would advise everybody who is interested in the internal workings of digital voice to write some code implementing some feature and really ask the "what if" questions. Or do some simulations using GNU Radio, GNU Octave, a radio-channel simulation tool, or whatever, and really look at what transmission errors do to your code or protocol! You will be very surprised; but ... it really helps you learn!




The thing about digital voice is a very simple fact with big consequences: digital voice is REAL TIME. This means that the modem and the protocol must be able to deal with TRANSMISSION ERRORS!
EVERY part of the protocol has to be made robust enough to deal with errors! Not only the voice, everything! Markers in the stream to keep it synchronised, "type of data" identification tags, "version id" information of the protocol, you name it. Everything!


This not only adds complexity to the protocol and the implementation code (and therefore the chance of bugs), it also greatly increases the bitrate you need.

Reducing the bitrate of the modem really eats into the ability to deal with errors!
A 2400 bps raw modem bitrate combined with a 1400 bps codec2 does not leave a lot of room for both voice FEC and all the additional bits needed for the protocol. In fact, in c2gmsk, synchronisation information is added at the cost of FEC data for the voice: the codec2 voice in a frame that carries synchronisation information has fewer FEC bits than in a frame that does not.
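The squeeze Kristoff describes is easy to see in numbers. Over a 40 ms frame the budget works out as follows (frame length is an assumption consistent with the 96-bit frames he describes later in the thread):

```python
# Per-frame bit budget: 2400 bit/s on air, 1400 bit/s of codec2 voice,
# over an assumed 40 ms frame.
frame_ms = 40
raw_bits = 2400 * frame_ms // 1000     # 96 bits per frame on the air
voice_bits = 1400 * frame_ms // 1000   # 56 bits of codec2 voice
overhead = raw_bits - voice_bits       # 40 bits left for FEC + sync + aux
print(raw_bits, voice_bits, overhead)
```

Only 40 bits per frame remain for everything that is not voice, so every bit spent on synchronisation is a bit not spent on FEC.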




I've seen quite a few discussions on this list about "brand new and exciting protocols/features we can create", the "ham radio is about inventing new things" idea.

My principle has always been "first learn to walk, then try to run".

So far, for codec2-based GMSK, we haven't even started to crawl!!!




73
kristoff - ON1ARF

Steve

unread,
Dec 30, 2013, 12:38:56 PM12/30/13
to digita...@googlegroups.com
On Monday, December 30, 2013 5:54:06 AM UTC-6, kristoff wrote:

...It only makes things more complex.
...a solution for a problem you created yourself.
...the cost of creating a control-protocol that is otherwise not needed.


Or am I missing something here?


I think connection protocols are square and datagram protocols are round. Thus, if all you have is round holes, you see connections as a "problem", "complex", and "otherwise not needed" :-)

Obviously, this isn't going to rise to the level of connection-oriented level 2 for data (and the protocol wars that went with it), but I can see where some management of network access could include connections versus just listening for broadcast datagrams.  Neither is going to work well at the fringes: if you are at the fringe, the net can close your connection, while the broadcast protocol keeps spraying useless packets.

Steve

unread,
Dec 30, 2013, 12:50:09 PM12/30/13
to digita...@googlegroups.com
On Monday, December 30, 2013 5:59:36 AM UTC-6, kristoff wrote:

Anycase, if you want to implement texting on ham-radio, use the APRS
message service.


For what its worth, I don't want to implement texting.  It was more an observation of the movement of society, and not a technical offering.

Kristoff Bonne

unread,
Dec 30, 2013, 3:49:45 PM12/30/13
to digita...@googlegroups.com
Hi Steve,
I already had that impression.  :-)

Concerning APRS, one big advantage is that it is the kind of technology that most people can still build themselves. I have a chipKIT Uno (a PIC32-based clone of the Arduino Uno) that has been lying here for more than a year, waiting for a suitable project. This would be ideal for it. I guess an Arduino Due should also be able to do the trick.

It would be a nice exercise in DSP. :-)



73
kristoff - ON1ARF

Matthew Pitts

unread,
Dec 30, 2013, 4:21:07 PM12/30/13
to digita...@googlegroups.com
The main problem I see in this approach is that the power-versus-bandwidth tradeoff has its own set of issues. Greater range with less power is good in theory, but at some point we have to consider the potential for interference with faraway stations. This is one of the current complaints about wide-bandwidth digital on HF: narrow-bandwidth stations complain about interference when too many stations compete for a given slice of spectrum, and I'm sure VHF/UHF repeater owners and simplex users would complain very loudly if their radios started putting out "digital noise" that they can't track down or decode without special hardware or software. Experimentation is fine, of course, but for something truly useful we need to consider how such an experimental system could interact with "incompatible" system hardware.

Matthew Pitts
N8OHU


From: Remco Post <remco...@gmail.com>
To: digita...@googlegroups.com
Sent: Sunday, December 29, 2013 8:09 PM

Subject: Re: [digitalvoice] FreeDV on 70cm

Bruce Perens

unread,
Dec 30, 2013, 6:02:09 PM12/30/13
to digita...@googlegroups.com
On 12/30/2013 01:21 PM, Matthew Pitts wrote:
The main problem I see in this approach is that the power versus bandwidth tradeoff has it's own set of issues. Greater range with less power used is good in theory, but at some point we have to consider the potential for interference with far away stations.
:-)

In general, it's not worthwhile to plan for an embarrassment of riches until we're sure we have one. Given the fact that we routinely host mobiles with 50 Watts all over 2M and 440, I'm not going to worry too much about a potential 7 dB improvement on a handheld. In the case of mobiles and bases, etc., it may just be that we decide that 10W is enough for normal operations. We do have the capability to tell the other station their RSSI information and this might be useful for power management. Everybody wants the battery to last longer.

Thanks

Bruce

Matthew Pitts

unread,
Dec 30, 2013, 9:29:15 PM12/30/13
to digita...@googlegroups.com
Bruce,

You're giving me ideas that I will need to flesh out... :-)

Matthew Pitts
N8OHU
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Bruce Perens

unread,
Dec 31, 2013, 4:20:54 AM12/31/13
to digita...@googlegroups.com, Kristoff Bonne
Kristoff,

It might be easier to apprehend if you think of it as stateful vs. stateless. Consider a vehicle that drives into an RF "hole" and out again. Either we resynchronize and send all of the routing information without assuming the relay knows any of it, or we resynchronize and send a very short reminder of the state the relay already knows.

So, what I'd like is to have fast resynchronization and to have the connection ID sent every 100 to 200 milliseconds. It's nice how FreeDV resynchronizes without any synch symbols at all. We can't be that good on GMSK, but if our bit scrambler is really simple we can resynchronize on short symbols.

Kristoff Bonne

unread,
Dec 31, 2013, 6:39:57 AM12/31/13
to digita...@googlegroups.com
Bruce,


(inline comments)



On 31-12-13 10:20, Bruce Perens wrote:
Kristoff,

It might be easier to apprehend if you think of it as stateful vs. stateless. Consider a vehicle drives into an RF "hole" and out again. We resynchronize and send all of the routing information without assuming that the relay knows any of it, or we resynchronize and send a very short reminder of the state that the relay already knows.

So, what I'd like is to have fast resynchronization and to have the connection ID sent every 100 to 200 milliseconds. It's nice how FreeDV resynchronizes without any synch symbols at all. We can't be that good on GMSK, but if our bit scrambler is really simple we can resynchronize on short symbols.
OK, let's do this step by step.


FreeDV is based on a modulation scheme that uses multiple carriers; in fact, the number of carriers is directly linked to the number of bits in a layer-2 frame. So, each carrier corresponds to one particular bit in the frame.


GMSK is sequential: bits are sent one after the other.
So, when a bit enters the decoder, how does it know where in the frame to place it?



73
kristoff - ON1ARF

Kristoff Bonne

unread,
Dec 31, 2013, 6:52:58 AM12/31/13
to digita...@googlegroups.com
Bruce,


(inline comments)
Two remarks:

- This assumes that the layer2 actually has access to OSI layer 1.

- Have you ever looked at the actual short-term RSSI values of incoming signal on a repeater? The DTMF/RCQ tool for D-STAR does allow that (based on the BER of the received D-STAR stream). You see that for people using handheld radios, the signal-level really varies quite a lot in very short time-spans.

The mobile phone protocols have had this feature ever since the GSM standard (IIRC), but mobile phones are full-duplex so allow this feature to operate in a much shorter feedback loop.
That's where it really works very well.



Of course, you can always add it to the protocol; that's not the point.

Steve

unread,
Dec 31, 2013, 11:39:42 AM12/31/13
to digita...@googlegroups.com
One thing I noticed on D-STAR was a note that the AMBE chip uses a convolutional 2/3 coder: 48-bit codec frames become 72 bits. Compare codec2, which uses a Golay block code on only 12 of the 52 bits, becoming 64.  It's kind of confusing, but the codec bits don't seem to be scrambled, while everything else is scrambled (whitened) individually before being combined into the output. You'd think the scrambling would have been saved for last. 

It also looks like the header has a different (rate 1/2) convolutional code.  They must have had a party after getting all this to work :-)
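For anyone who wants to poke at the block-code side of this comparison, a systematic Golay(23,12) encoder is only a few lines. This is a generic sketch using one standard generator polynomial; it makes no claim about the exact bit ordering D-STAR or codec2 uses.

```python
GOLAY_G = 0xAE3  # a standard degree-11 generator polynomial for Golay(23,12)

def golay23_encode(data12: int) -> int:
    """Systematic encode: 12 data bits in, 23-bit codeword out."""
    cw = (data12 & 0xFFF) << 11        # place data in the top 12 bits
    r = cw
    for i in range(22, 10, -1):        # polynomial long division mod 2
        if r & (1 << i):
            r ^= GOLAY_G << (i - 11)
    return cw | (r & 0x7FF)            # append the 11 parity bits

# The (23,12) Golay code has minimum distance 7, so it corrects up to
# 3 bit errors per 23-bit block.
```

A quick way to convince yourself the encoder is right: every nonzero codeword of the Golay code has Hamming weight at least 7.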

Kristoff Bonne

unread,
Dec 31, 2013, 12:44:02 PM12/31/13
to digita...@googlegroups.com
Steve,





On 31-12-13 17:39, Steve wrote:
> One thing I noticed on dstar was a note that the ambe chip uses a
> convolutional 2/3 coder. 48 bit codec frames becoming 72 bit. This in
> comparison to the codec2 using a golay block code on only 12 of the 52
> bits, becoming 64. It's kind of confusing, but the codec bits don't
> seem to be scrambled, but everything else is scrambled (whitened)
> individually before being combined into the output. You'd think that
> the scrambling would have been saved for last.
D-STAR uses a Golay code on half of the frame; that's how 48 bits
become 72 bits.

There was a message about it here on this list with the details. If I
remember correctly, it was something like this: an AMBE frame is actually
49 bits, not 48. So in one of the Golay frames, "bit 24" is used to store
voice data too.

I do not remember if in the 3 other golay blocks, the 24th bit is used
to store additional parity-data or if it is unused.


> It also looks like the header has a different (rate 1/2) convolution.
> They must have had a party after getting all this to work :-)
That is correct. The header uses standard 1/2 convolutional FEC.

As all information in the header is equally important, that is the most
logical thing to do.

The reason that protecting only about half of the voice bits works is
that in voice, some information is more "important" than other
information (i.e. the audible impact of a bit being received wrong
varies with the function of that bit).


For your reference, c2gmsk has, per codec2 frame (96 bits, i.e. 40 ms @
2400 bps), either 1 or 3 Golay blocks, depending on whether the frame
also contains a 24-bit synchronisation/type-of-information pattern:
- 56 bits codec2 + 36 bits Golay + 4 bits auxiliary data = 96 bits
- 56 bits codec2 + 12 bits Golay + 24 bits synchronisation + 4 bits
auxiliary data = 96 bits

For FDMDV FreeDV, there is indeed only one single golay-block.



73
Kristoff - ON1ARF

Reuven Z Gevaryahu

unread,
Dec 31, 2013, 2:01:04 PM12/31/13
to digita...@googlegroups.com
Some of that was an error in an earlier post of mine. The AMBE+2 codec (DMR, P25 Phase 2, etc.) uses a Golay 24 and a Golay 23, leaving an extra bit for voice. The AMBE+ codec used in D-STAR does not; it uses two Golay 24s, I believe. 7 (/6 in Golay) bits fundamental frequency, 6/4 bits gain, 9/7 PRBA24 spectral, 7/5 PRBA58 spectral, 4/2 HOC0 spectral, 4/0 HOC1 spectral, 4/0 HOC2 spectral, 3/0 HOC3 spectral, and 4/0 voicing, yielding 48 bits of voice data. (Now don't ask me what that all means; I can just read the code. There are others on the group who can explain the math.) 12+12 extra bits are added for the two Golays, yielding 72-bit frames. 24 bits added for "slow data" or sync (every 21 frames) yields 96-bit frames.

I just counted these in the code. If anybody else is curious about the internal workings of AMBE+ (as used in D-STAR), the dsd/mbelib code can be consulted. It isn't the best-written code, and certainly isn't authoritative, but it gives an idea of what is going on. I believe there are a few small fixes that haven't been merged yet, too.

Parse the D-Star stream- https://github.com/szechyjs/dsd/blob/master/dstar.c (Kristoff has a more complete implementation)
Deinterleave the AMBE+ data blocks using this table: https://github.com/szechyjs/dsd/blob/master/dstar_const.h
Degolay, XOR with PRNG, Degolay, decode to the MBE parameters above like this: https://github.com/szechyjs/mbelib/blob/master/ambe3600x2400.c
Pass parameters to https://github.com/szechyjs/mbelib/blob/master/mbelib.c to be synthesized into speech.

Once I got past the fact that the math is above my head, the decoding seems straightforward. :-)

Now one thing that should interest people is how AMBE+2 differs from AMBE+. The math is the same, but the distribution of the bits and the Golay protection changed. Of particular note, fewer of the fundamental bits and more of the voicing bits are protected. AMBE+2 has:
7 (/4 in Golay) bits fundamental frequency, 5/4 voicing, 5/4 bits gain, 9/8 PRBA24 spectral, 7/4 PRBA58 spectral, 5/0 HOC0 spectral, 4/0 HOC1 spectral, 4/0 HOC2 spectral, and 3/0 HOC3 spectral, yielding 49 bits of voice data; 12+11 extra bits are added for the two Golays, yielding 72-bit frames.

See the AMBE+2 parameter decoding in https://github.com/szechyjs/mbelib/blob/master/ambe3600x2450.c if I counted wrong...
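Reuven's bit counts can be cross-checked by summation (taking the allocations exactly as he lists them, not independently verified against mbelib here):

```python
# Per-parameter voice-bit allocations as listed above, summed as a check.
ambe_plus  = {"fund": 7, "gain": 6, "prba24": 9, "prba58": 7,
              "hoc0": 4, "hoc1": 4, "hoc2": 4, "hoc3": 3, "voicing": 4}
ambe_plus2 = {"fund": 7, "voicing": 5, "gain": 5, "prba24": 9,
              "prba58": 7, "hoc0": 5, "hoc1": 4, "hoc2": 4, "hoc3": 3}
assert sum(ambe_plus.values()) == 48    # AMBE+ (D-STAR): 48 + 24 parity = 72
assert sum(ambe_plus2.values()) == 49   # AMBE+2: 49 + 23 parity = 72
```

Both variants land on the same 72-bit frame; the extra AMBE+2 voice bit is paid for by the shorter Golay 23.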


--Reuven (KB3EHW)

Matthew Pitts

unread,
Dec 31, 2013, 2:10:05 PM12/31/13
to digita...@googlegroups.com
Reuven,

Interesting; not as different as I thought they would be. And this does provide some insight as to why DMR might sound better than D-STAR, even using the same vocoder chip, since the AMBE 3000 chips support the older implementations.

Matthew Pitts
N8OHU



From: Reuven Z Gevaryahu <reu...@alumni.upenn.edu>
To: digita...@googlegroups.com
Sent:
Subject: Re: [digitalvoice] Re: FreeDV on 70cm

Steve

unread,
Dec 31, 2013, 2:25:59 PM12/31/13
to digita...@googlegroups.com
I thought the FEC did a great job on the 1300 codec, when comparing the test audio samples. Just for comparison purposes, from the code comments:

Protect first 12 out of first 16 excitation bits with (23,12) Golay Code
4 voicing bits
4 MSB of 7 pitch bits
4 MSB 5 energy bits

I would think the voicing bits would be prime candidates for FEC, to keep most of the burps and beeps tamed...

Bruce Perens

unread,
Dec 31, 2013, 3:43:03 PM12/31/13
to digita...@googlegroups.com
On 12/31/2013 03:52 AM, Kristoff Bonne wrote:

> GMSK is sequential. Bits are sent one after the other. So, when a bit
> enters the decoder, how does it know where in the frame it should place it?

First, let's assume that we can PLL to the raw GMSK signal, and we
enforce that there is one correct phase. We aren't scrambling because we
maintain some minimum number of 0/1 transitions in our data. We
interleave 48 bits of overhead into 4 voice frames, for a 1:4
overhead-to-payload ratio. Within the 48 bits of overhead are a known
data pattern (which maintains our minimum number of 0/1 transitions) and
an error-corrected connection ID. We remember enough of the raw incoming
bits that after we find sync in this, we can play back the 4 frames of
audio data. Once we are synced, we can use timing to reconstruct the
sync pattern and the error-corrected connection identifier from incoming
data even if there are some bits missing. We just get a probability, and
if it's high enough we accept that we are still in sync, otherwise we go
back to seeking sync within a stream.
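The "probability high enough" idea can be sketched as a sync-word search that tolerates a few bit errors, i.e. accepting any window within a Hamming-distance threshold of the known pattern. The 16-bit pattern and threshold below are arbitrary illustrations, not the actual overhead layout Bruce describes.

```python
def find_sync(bits, pattern, max_errors=2):
    """Return offsets where `pattern` appears within `max_errors` bit errors."""
    hits = []
    for i in range(len(bits) - len(pattern) + 1):
        # Hamming distance between this window and the known sync word
        errs = sum(b != p for b, p in zip(bits[i:i + len(pattern)], pattern))
        if errs <= max_errors:
            hits.append(i)
    return hits

sync = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1]  # arbitrary 16-bit word
rx = [0] * 10 + sync + [0] * 10
rx[12] ^= 1          # corrupt one bit of the embedded sync word
print(10 in find_sync(rx, sync, max_errors=2))  # True: found despite the error
```

A real receiver would also use the expected frame timing, as Bruce notes, so it only tests windows near the predicted position instead of scanning every offset.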
> - This [sending RSSI to the peer] assumes that the layer2 actually has
> access to OSI layer 1.
Yes. This is not the first time we're crossing layers, our error
correction has knowledge of the codec and our HF modem design has
knowledge of both. If we had bandwidth to throw away we could be neater
about the layering of codec and modem, but we'd still have to cross
layers to handle RSSI indication to the peer.
>
> - Have you ever looked at the actual short-term RSSI values of
> incoming signal on a repeater? The DTMF/RCQ tool for D-STAR does allow
> that (based on the BER of the received D-STAR stream). You see that
> for people using handheld radios, the signal-level really varies quite
> a lot in very short time-spans.
Yes. If you have simplex, we can send the lowest level between two locks
to the same transmission in the past minute or less. So you will hear if
you've dropped out. For full duplex we can probably do better.


Kristoff Bonne

unread,
Jan 2, 2014, 9:55:48 AM1/2/14
to digita...@googlegroups.com
Bruce,


On 31-12-13 21:43, Bruce Perens wrote:
> On 12/31/2013 03:52 AM, Kristoff Bonne wrote:
>
> > GMSK is sequential. Bits are sent one after the other. So, when a
> bit enters the decoder, how does it know where in the frame it should
> place it?
>
> First, let's assume that we can PLL to the raw GMSK signal, and we
> enforce that there is one correct phase. We aren't scrambling because
> we maintain some minimum number of 0/1 transitions in our data. We
> interleave 48 bits of overhead into 4 voice frames, for a 1:4
> overhead-to-payload ratio. Within the 48 bits of overhead are a known
> data pattern (which maintains our minimum number of 0/1 transitions)
> and an error-corrected connection ID.
48 bits for both the scrambling-sequence and connectionID +
error-correction ???

Are you sure you have done the maths correctly? My calculation on a
napkin seems to indicate you'll end up with a resynchronisation pattern
that is weaker than what is now on D-STAR, because of the additional
requirement of the connectionID.



> We remember enough of the raw incoming bits that after we find sync in
> this, ...
> we can play back the 4 frames of audio data. Once we are synced, we
> can use timing to reconstruct the sync pattern and the error-corrected
> connection identifier from incoming data even if there are some bits
> missing. We just get a probability, and if it's high enough we accept
> that we are still in sync, otherwise we go back to seeking sync within
> a stream.
Not really.
At the point when you start to pick up a signal again, anything you
received before that isn't valid anyway. There isn't anything to go back to!
The only option is to wait for more data to come in.

The thing is that the less FEC you apply to the synchronisation pattern,
the more difficult it is to recover it. Fast resync and a low bitrate
for synchronisation contradict each other!


Somebody once told me "be careful what you spend your money on, ... but
do not be afraid to spend it on things that are important in life
(health, security, happiness)". In DV, the synchronisation pattern is
one of the things on which you do not cut corners. It is a SPOF (single
point of failure) of the complete system.



In any case, if 48 bits is 1:4 for 4 voice-frames, you only have 1200
bps of voice-data. Where is the voice FEC?


>> - This [sending RSSI to the peer] assumes that the layer2 actually
>> has access to OSI layer 1.
> Yes. This is not the first time we're crossing layers, our error
> correction has knowledge of the codec and our HF modem design has
> knowledge of both.
The problem with anything that crosses the layers is that it makes your
protocol completely useless for experimentation.

What if somebody wants to implement a different voice-codec, or
encrypted voice, or raw data? Say somebody is interested in 10 meter /
VHF-low / VHF DXing and wants to adapt the protocol for long-term fading.
If you start mixing layers, there isn't that much difference between
this protocol and any of the "buy-and-use" protocols (D-STAR, DMR or
dPMR). You cannot adapt such a protocol without possibly producing a
lot of R2D2 on other radios that might happen to pick up the signal.
BAD IDEA!!!


You advocate a new type of HT radio that would allow people to
"experiment, based on apps". But if you do not provide the underlying
protocol to support that, the only "apps" you are going to have are
"low-bitrate voice", "low-bitrate voice", "low-bitrate voice" and ...
"low-bitrate voice".


If I want a cheap DV radio I just want to *use*, I'll get myself one of
these cheap Chinese sub-100 dollar dPMR radios. dPMR does allow me to
create a new protocol and use it next to the default dPMR stack on the
same frequencies ... and it can be done on any normal FM radio with a
9k6 audio port.



> If we had bandwidth to throw away we could be neater about the
> layering of codec and modem, but we'd still have to cross layers to
> handle RSSI indication to the peer.

Aren't you comparing apples and eggs? The situation for an HF modem is
completely different than for one for VHF/UHF.

On HF, bandwidth is scarce because of the simple fact that there isn't
that much available to start with, and because an HF signal carries over
thousands of kilometers. There, trying to reduce bandwidth by
intermixing the different OSI layers is a useful thing.

On VHF and UHF, that is not at all the case. The only reason there would
be "no bandwidth to throw away" is because of a requirement you created
yourself, a requirement mainly based on a local situation (if I am
correct, Southern California). Creating a new GMSK protocol is not about
providing an "it needs less bandwidth" argument as a selling point for
an SDR-based radio.



There is no issue whatsoever with bandwidth on VHF and above. As
already mentioned here, most of the repeater frequencies are silent. And
we have complete bands that are almost all unused (like the 6, 4 and 1.5
meter and 23 cm bands).
If you want to solve the issue of repeater capacity in metropolitan
areas, create a cheap 23 cm handheld and mobile radio for FM. If needed,
cross-band link it to 1.5, 2, 4, 6 or 10 meter for long-distance
coverage to mobile or fixed stations.

The main issue with frequencies for repeaters is that everybody wants to
cram themselves onto the same two bands, simply because of the lack of
availability of cheap portable radios for the other bands.
Designing a cheap FM handheld radio for 23 cm might not be as
"high-tech" or "sexy" as designing a complete new DV protocol or
"building the radio of the future: an SDR radio with touchscreen and
apps", but -if creating repeater frequencies for highly populated areas
is your goal- it would surely be a lot more helpful.


The reason FreeDV does so well is because it works very well ... and
because it does not require people to buy a new radio.

KEEP IT SIMPLE!




73
kristoff - ON1ARF

Bruce Perens

unread,
Jan 2, 2014, 1:09:05 PM1/2/14
to digita...@googlegroups.com, Kristoff Bonne


Kristoff Bonne <kris...@skypro.be> wrote:
>Bruce,
>
>
>On 31-12-13 21:43, Bruce Perens wrote:
>Are you sure you have done the maths correctly? My calculation on a napkin seems to indicate you'll end up with a resynchronisation pattern that is weaker than what is now on D-STAR, because of the additional requirement of the connectionID.

Certainly shorter. Is it weaker? Yes, if you do what D-STAR does, which is to use a long header sequence to lock its PLL, and that sequence contains no data and is thus thrown away.

What if we did not have to throw that sequence away, and could use it for data? We can store a buffer of the incoming samples, lock our software PLL on the bit shifts of data rather than header bits, and then go back in the sample buffer and recover all of the bits from before we had lock. Other platforms like D-STAR don't do this because they were designed for less computing capability on the handheld platform.

Now that we've done this, we only need a fixed data sequence to tell us where the frame begins, not to lock our PLL. So, we can use a much "weaker" sequence than D-STAR.

We lose the signal for a moment. Our PLL free-runs at the last known frequency with as much precision as our internal timebase can provide, which is enough for this purpose. When the signal comes back, we can recover the lock very quickly, and again we have that sample buffer so we don't lose bits that came in before we were sure of our lock. If frames are of fixed length, we even know at what time the frame start should come. We know what the entire 48-bit overhead sequence should be. We compare it against what we see at that time, and arrive at a probability that we are still copying the same station as we were before we lost signal.
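A rough sketch of that "remember the raw bits, then decode backwards after lock" idea (frame sizes are illustrative, borrowed from the 48 overhead bits and 4 voice frames of 64 bits discussed in this thread):

```python
from collections import deque

# Keep a rolling history of raw demodulated bits so that, once sync is
# found, the whole frame can be played back -- including bits that
# arrived before the lock was confirmed. Sizes are illustrative.
FRAME_BITS = 48 + 4 * 64  # 304 bits per interleaved frame

class BitHistory:
    def __init__(self):
        # enough history to reach back a full frame before the sync point
        self._bits = deque(maxlen=2 * FRAME_BITS)

    def push(self, bit):
        self._bits.append(bit)

    def replay(self, sync_index):
        """Return one full frame starting at sync_index in the history."""
        return list(self._bits)[sync_index:sync_index + FRAME_BITS]
```

A hardware decoder without this buffer has to discard everything received before lock, which is why long dedicated preambles were needed on earlier platforms.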

Regarding FEC, let's theorize that we are able to use as much FEC as we currently use on HF. So, we have a 1600 bit per second signal. We add a 1:4 overhead to get 2000 bits per second. Oops, we're over 2 kHz. We apply a 0.7 spectrum efficiency factor for GMSK and get 2.5 kHz. I can live with that :-) We could go to C4FM for narrower bandwidth, but that would lose us 3 dB S/N.

Regarding crossing layers, I think it's more of a concern in our combination of the codec and its FEC than it is for an RSSI value. The RSSI value is provided by an API and any layer that people drop in can call that API, or not. In the case of the FEC, it actually has knowledge of the underlying layer. But even in that case I am not seeing it as an unresolvable tangle that prevents modification. One can slot in an FEC that has no knowledge of the codec, if desired.
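To illustrate the API point, a toy sketch (all class and method names here are hypothetical, not from any real codebase): the RSSI reading sits behind one call, and a layer may use it or ignore it.

```python
# RSSI behind a small API: the layer crossing is optional, not baked
# into the frame format. Names and values are illustrative only.
class PhysicalLayer:
    """Stand-in for the radio hardware; returns a canned RSSI reading."""
    def rssi_dbm(self):
        return -97.0

class VoiceProtocol:
    def __init__(self, phy, report_rssi=True):
        self.phy = phy
        self.report_rssi = report_rssi

    def transmission_tail(self):
        """Fields appended at the end of a transmission."""
        tail = {"eot": True}
        if self.report_rssi:  # the only layer crossing, via the API
            tail["rssi_dbm"] = self.phy.rssi_dbm()
        return tail
```

A drop-in replacement layer that never calls `rssi_dbm()` stays entirely within its own layer.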

Thanks

Bruce

Bruce Perens

unread,
Jan 2, 2014, 1:47:49 PM1/2/14
to digita...@googlegroups.com, Kristoff Bonne
I am figuring a 6-frame latency for this system. One frame in the transmitter, it starts transmitting a frame as soon as the codec is finished. 4 frames for our incoming sample buffer, and we recover 4 speech frames and an overhead frame from that (even if we're seeing parts of two 4-frame sequences). One more frame to reproduce the codec data as audio. In a system with a repeater which recovers the frame and routing data before retransmission, we add another 4 frames latency to make 10 frames total latency. This is 1/3 second.

So, consider that a transmission has 6 frames for data at the tail, which is going to come in to the receiver while it's still reproducing the previous audio frames. At 1600, frames are 64 bits (I think). We have overhead, and in addition we need to do something to distinguish data from voice. So, say we end up with 256 bits for data and its FEC. With a 1:1 FEC we get 16 bytes room for data. This is enough room to send RSSI and other information on the tail of each transmission.
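The tail-data arithmetic works out as stated; a quick check of the numbers above, nothing more:

```python
# 256 bits shared between payload and a 1:1 FEC leaves 128 payload
# bits, i.e. 16 bytes for RSSI and other end-of-transmission data.
tail_bits = 256
payload_bits = tail_bits // 2   # 1:1 FEC: half the bits are parity
payload_bytes = payload_bits // 8
```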

I remember we also have some room in unvoiced codec frames that we are not using, and that one-bit channel that we are using on HF.

Thanks

Bruce

Bruce Perens

unread,
Jan 2, 2014, 2:00:20 PM1/2/14
to digita...@googlegroups.com, Kristoff Bonne
We could potentially get latency down to 2 frames if we speed up the speech timing once we have lock. If we take two seconds to do that, probably nobody would notice. Losing lock would mean inserting silence and then catching up again. It's complicated enough that I would not do it in the initial implementation.

James Hall

unread,
Jan 2, 2014, 2:04:29 PM1/2/14
to digita...@googlegroups.com
I'm hoping the GMSK mode can be made versatile. What if someone wants to make a mode where you compose a voice mail sort of thing on your radio and then others can digipeat it out of an area with no repeater coverage? Another similar use would be a pre-recorded voice message keyed to GPS coordinates. Could be an application where you record such messages with your current location and then send them.. when that person gets there, the message plays back for them. How about handling streams where there is no voice data, but only objects such as a JPG picture, a text message or a stream of JSON data?

As I understand it, D-STAR is made only for real-time voice data and any other data is just a side effect. Out of the "1200" bps for data, you get less than that to actually use. It's been a while since I read it but I think it turns out to be 761 bps or close to it. When you're just sending data, it packs a bunch of empty voice frames to fill out the 4800 baud stream, wasting a lot of power. If my math is right, it takes 6 times longer to send the same data as it would if you could use the whole 4800 baud stream. And there's no FEC on the data frames either.

Will this new protocol end up doing the same thing? Sending empty voice frames and wasting power? I think there's a lot of good reasons to interleave data also, but there's a lot of good reasons to say "This is for data" and let it use the whole bandwidth when there's no real-time voice to worry about. If at all possible.
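The "6 times longer" figure checks out roughly, taking the quoted numbers at face value:

```python
# Throughput comparison using the figures quoted above: 761 bps of
# usable slow data versus the whole 4800 baud over-the-air stream.
slow_data_bps = 761
full_stream_bps = 4800
ratio = full_stream_bps / slow_data_bps  # roughly 6.3
```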

Just some things to think about.

On Thu, Jan 2, 2014 at 9:55 AM, Kristoff Bonne <kris...@skypro.be> wrote:

What if somebody wants to implement a different voice-codec, or encrypted voice, or raw data? Say somebody is interested in 10 meter / VHF-low / VHF DXing and wants to adapt the protocol for long-term fading.
If you start mixing layers, there isn't that much difference between this protocol and any of the "buy-and-use" protocols (D-STAR, DMR or dPMR). You cannot adapt such a protocol without possibly producing a lot of R2D2 on other radios that might happen to pick up the signal.

BAD IDEA!!!


You advocate a new type of HT radio that would allow people to "experiment, based on apps". But if you do not provide the underlaying protocol to support that, the only "apps" you are going to have are "low-bitrate voice", "low-bitrate voice", "low-bitrate voice" and ... "low-bitrate voice".


If I want a cheap DV radio I just want to *use*, I'll get myself one of these cheap Chinese sub-100 dollar dPMR radios. dPMR does allow me create a new protocol  and use it next on the same frequencies as the default dPMR stack ... and it can be done on any normal FM-radio with a 9k6 audio port.





73
kristoff - ON1ARF


--
You received this message because you are subscribed to the Google Groups "digitalvoice" group.
To unsubscribe from this group and stop receiving emails from it, send an email to digitalvoice+unsubscribe@googlegroups.com.

John D. Hays

unread,
Jan 2, 2014, 2:09:39 PM1/2/14
to digita...@googlegroups.com
D-STAR has about 1/2 second of latency and people notice.



John D. Hays
K7VE
PO Box 1223, Edmonds, WA 98020-1223 
  



Kristoff Bonne

unread,
Jan 2, 2014, 3:02:57 PM1/2/14
to digita...@googlegroups.com
James,



For me, the reason to get involved in digital voice based on codec2 is that it must be a platform to allow hams to learn how digital voice and digital communication work, and be able to experiment with them.


Although digital communication is at its lowest level about ones and zeros, actually creating a digital voice system is far from "digital". One of the things where digital fundamentally differs from analog is that it has a lot more buttons and knobs you can tune: there are many more parameters in a digital communication system than in an analog radio-path.

And, the interesting thing is, if you change one, you will almost always affect others: interleaving affects latency, bitrate affects synchronisation time, etc.



So you cannot learn how digital communication really works, without being able to experiment with it.

Digital communication can be adapted to suit all kinds of different radio-channel characteristics: a local QSO on a UHF/SHF repeater using a handheld radio is different from operating a remote repeater 80 km away over VHF-low, from doing Es DX on 10 meter, from operating satellite or from doing NVIS on 40, 60 or 80 meter.
An application like "voice" has different requirements from "texting", and machine-to-machine "bulk" data transfer is different from an interactive chat session between two persons.


C2gmsk exists in two variants: 2400 and 4800 bps. It begins with a version-id and supports -in the current code- up to 16 types of data, one of them pre-defined as "private/experimental". (If that is not an invitation to experiment, I don't know what is.)
It has always been the goal to allow (say) somebody to rewrite the code and use it for -say- a 4800 bps raw data-pipe, or -as I already mentioned- allow somebody who is interested in VHF DXing to adapt it to suit her needs better (e.g. by implementing multiframe interleaving).
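As an illustration of that kind of header, a toy version (the packing into a single byte and the field widths are guesses for the example, not the actual c2gmsk on-air format):

```python
# Illustrative header: a version-id plus one of 16 data types, one of
# them reserved for private/experimental use as described above.
DATATYPE_EXPERIMENTAL = 15

def pack_header(version, datatype):
    assert 0 <= version < 16 and 0 <= datatype < 16
    return (version << 4) | datatype  # version in high nibble, type in low

def unpack_header(octet):
    return octet >> 4, octet & 0x0F
```

A receiver that sees an unknown type can simply skip the payload, which is what makes such a field an invitation to experiment.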



If that cannot be done, even at the cost of adding some more identification fields in the bitstream and therefore increasing the bitrate, then you have just created another (boring) "D-STAR".


73
kristoff - ON1ARF

Bruce Perens

unread,
Jan 2, 2014, 4:19:52 PM1/2/14
to digita...@googlegroups.com, Kristoff Bonne
The way I am addressing development is to make a 100% software implementation. That way, if you want to send a digital text message, you change the program. The first platform I am planning to put this on is an SDR one, so again if you want more bandwidth you change the software.

The problem with placing big empty data channels alongside the voice is that we pay for them all of the time: with decreased range, increased bandwidth use, and shorter battery life. We have the power to turn on C4FM or otherwise change the modulation when we want to experiment, without paying the price when we don't.

Thanks

Bruce

Matthew Pitts

unread,
Jan 2, 2014, 11:52:43 PM1/2/14
to digita...@googlegroups.com
James,

Actually, you're only partly correct; there are three parts to the D-STAR protocol: Digital Voice (with slow data), Digital Data (GMSK-encapsulated Ethernet) and Analog Bridge mode. DV Slow Data mode was intended for GPS, packet-like text, and similar things that weren't absolutely critical. The apparent intention was that you could have a radio with the 4800 bps rate that could switch between voice and data as needed, but Icom seemingly misunderstood the wording of the DD part and only implemented that as a super high speed system on 1.2 GHz.

Matthew Pitts
N8OHU
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Kristoff Bonne

unread,
Jan 3, 2014, 2:34:50 AM1/3/14
to digita...@googlegroups.com
Bruce,


One small comment:


On 02-01-14 19:09, Bruce Perens wrote:
> Kristoff Bonne <kris...@skypro.be> wrote:
>> Bruce,
>>
>>
>> On 31-12-13 21:43, Bruce Perens wrote:
>> Are you sure you have done the maths correctly? My calculation on a napkin seams to indicate you'll end up with a resyncronisation-pattern that is weaker than what now on D-STAR, because of additional requirement of the connectionID.
> Certainly shorter. Is it weaker? Yes, if you do what D-STAR does, which is use a long header sequence to lock its PLL, and that sequence contains no data and is thus thrown away.
> What if we did not have to throw that sequence away, and could use it for data? We can store a buffer of the incoming samples, lock our software PLL on the bit shifts of data rather than header bits, and then go back in the sample buffer and recover all of the bits from before we had lock. Other platforms like D-STAR don't do this because they were designed for less computing capability on the handheld platform.
(...)


Aren't you mixing up protocol design with implementation?

Power/CPU limitations can be a design criterion for a protocol, but "I
found this software trick I want to implement" should not be.




> Thanks

Kristoff Bonne

unread,
Jan 3, 2014, 3:02:08 AM1/3/14
to digita...@googlegroups.com
Bruce,


There is also a different way to look at this:
First implement a protocol that is sufficiently open and easily extended to serve as an experimentation toolkit. After that, and after doing sufficient testing, go for specialised protocols for one particular application.


It is a lot easier for you to take an existing code-base of a known working protocol-stack and "#ifdef" out a subset of it, than it is for somebody else to have to implement additional features on top of it.




The ham-community (in general) has a problem with digital voice: not with using it, but with the basic knowledge of how it works and how to implement it.
How many people are actually doing work on this and writing code?

If you just design and implement a protocol for some very specific and limited system, just to implement one particular application, the net result is exactly the same thing as another D-STAR. The fact that the source code is published will not change anything about that. There is no need for another "buy-and-use" protocol. There are currently already 4 of them out there!


What the ham-community needs is a DV equivalent of the arduino, gertboard, <some-dev-board>: an easy-to-use and flexible kit to allow people to gain access to, learn about and experiment with all aspects of electronics. (Just replace the word "electronics" with "digital voice".)

And yes, the arduino has a lot of stuff that you might not need in the final application. A board like the gertboard even more. But it allows people to learn about electronics and that is what it is about.

And if the only thing you need is something to blink a LED in the dark, then you can build your own little board with an ATtiny without all the "blob" of the arduino.
That is a lot easier than providing the schematics and C code for a "blink a LED if the light is out" on AVR and saying "OK, here is the source code and it is now up to you to design a WSPR signal-generator with an AD9850 based on that".



Yes, this is a lot more work and a lot less "sexy" than designing an SDR radio, but in the long run, it will help the ham-community much more!



73
kristoff - ON1ARF

Bruce Perens

unread,
Jan 3, 2014, 1:22:50 PM1/3/14
to digita...@googlegroups.com, Kristoff Bonne


Kristoff Bonne <kris...@skypro.be> wrote:

> Aren't you mixing up protocol design with implementation? Power/CPU limitations can be a design criterion for a protocol, but "I found this software trick I want to implement" should not be.

All working protocol designs are built to the implementation capabilities of the time. The use of GMSK is obviously dependent on having a PLL and a Gaussian filter. PLLs were once very difficult to implement, which was evident in the tube count of early televisions and their propensity to lose lock and require tweaking of the vertical and horizontal controls (which I remember well). So, nobody would have considered a communications protocol that needed them if there was any other way to do it. And consider building complex filters as discrete analog implementations. Now, we use them with impunity. Most advances will come from making use of newly available capability.

>There is also a different way to look at this: For implement a protocol that is sufficient open and easily to extended as an experimentation toolkit. After that, and after doing sufficient testing, go for specialised protocols for one particular application.

If we are able to provide basic softmodems of various rates, protocols will be implemented atop them. Providing a fast-lock mechanism that doesn't require a long header will not prevent this from happening.

But I think that David has shown us a critical lesson with his HF implementation. You can get really significant gains if you don't design your voice communications the way you would design a data communications protocol. FEC-heavy implementations are necessary for digital data but their performance stinks compared to error-tolerant ones.

>What the ham-community needs is a DV equivalent of the arduino

Actually, I think they need a RF modulation equivalent of Arduino, not a DV equivalent. DV is one of the programs that you implement upon such a thing, but it is equally usable for data or maybe even radar (there's a really impressive global-range HF radar implementation on Hermes using chirp modulation). This would be a platform for implementing softmodems of various bandwidths and codecs of various types.

I am not aiming to provide them with a general purpose virtual wire for bits, where we implement the modem, packet protocol and FEC and they do all of their work on top of that. I am leaving it to some of them to implement the modem. Many of the implementations on this platform will indeed be a virtual wire for bits, but there will be many different kinds of them.

Thanks

Bruce

Kristoff Bonne

unread,
Jan 9, 2014, 2:45:30 AM1/9/14
to digita...@googlegroups.com
Hi Bruce,


This message has been stuck in my "draft" folder for more than a week.
So, ... with a little bit of delay!



On 03-01-14 19:22, Bruce Perens wrote:
>> Aren't you mixing up protocol design with implementation? Power/CPU limitations can be a design criterion for a protocol, but "I found this software trick I want to implement" should not be.
> All working protocol designs are built to the implementation capabilities of the time. The use of GMSK is obviously dependent on having a PLL and a Gaussian filter. PLLs were once very difficult to implement, which was evident in the tube count of early televisions and their propensity to lose lock and require tweaking of the vertical and horizontal controls (which I remember well). So, nobody would have considered a communications protocol that needed them if there was any other way to do it. And consider building complex filters as discrete analog implementations. Now, we use them with impunity. Most advances will come from making use of newly available capability.
You know, this makes me think about somebody who gets his first arduino
and says:

"OK, that blink application, that is all very nice but that's the same
thing that everybody else is doing. It is not very innovative, is it?
So, let's make this a bit better. If we know the characteristics of the
LED then -instead of just switching on the LED- we can switch it on and
off very rapidly. If we pass just the correct amount of current through
the LED ... we can remove one resistor. Wouldn't that be neat for a
first application?"


What you are trying to do is implement the features of a multicarrier
modem on a single-carrier system.
Now, I am not saying that the assumptions on which you build your
specifications are wrong, but are we in the position to say that we have
done the needed tests that show that the assumptions are indeed correct?



FreeDV does have a dedicated carrier for bit-synchronisation and
frequency reference, running at double the power of the normal
data-carriers and using DPSK instead of QPSK. That does say something
about the importance of synchronisation, no? (It's the same kind of
technique as used in (say) DVB-T.)

Shouldn't we first try to get something in the air that simply works? At
least, this would give us the possibility to actually get some
experience with codec2-based GMSK and simply collect data.
Currently, our experience with codec2 DV over GMSK is .... ZERO!





> But I think that David has shown us a critical lesson with his HF implementation. You can get really significant gains if you don't design your voice communications the way you would design a data communications protocol. FEC-heavy implementations are necessary for digital data but their performance stinks compared to error-tolerant ones.
That is true. That's why you have things like AAC-ER (error-resilient)
audio.
https://en.wikipedia.org/wiki/Advanced_Audio_Coding#Error_Resilient_.28ER.29_AAC


And that's also the difference between the design criteria for a
voice-codec for radio (like codec2 and AMBE) and one for (say) VoIP
(like iLBC).
(That was part of my talk on "codec2 and DV over radio" I gave at HSB in
September. :-) )


But good telecommunication systems use a mix of techniques to try to
get the best out of everything. Using an error-resilient codec is only
part of the solution.
Quite a few systems use multiple layers of protection, with UEP
(unequal error protection) on the inner layer and "normal" FEC on top
of that.

Using a FEC has a number of advantages, one of them being that it
provides information about bit-error rate, and it makes the digital
"cliff" between good audio and bad audio steeper.
I found an interesting reference on the Wikipedia page on T-DAB+:
listener tests have indicated that people really preferred the steep
cliff of T-DAB+ (which uses AAC) over T-DAB (which uses MPEG layer II).
(I am currently on the train so I cannot provide the link.)


Any reasonable FEC should provide you with a gain of over 2.1 dB for a
FEC of 1/2. So, considering the -3 dB from the twice-as-large
bandwidth, the "cost" of an outer-layer FEC is less than 1 dB.
But, you then have to add this:
- the better digital cliff of using a FEC
- the additional information on BER
- the better performance of GMSK at a higher bitrate over normal
(non-SDR) radios
- the fact that 99.9999% of the ham-community uses FM transmitters for
GMSK, and not the SDR radio you mention
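The arithmetic above, spelled out (taking the 2.1 dB figure as given):

```python
import math

# Net cost of adding a rate-1/2 outer FEC, per the argument above:
# doubling the bandwidth doubles the noise power (about 3 dB), while a
# reasonable rate-1/2 FEC is assumed to give back at least 2.1 dB.
noise_penalty_db = 10 * math.log10(2)          # ~3.01 dB
fec_gain_db = 2.1
net_cost_db = noise_penalty_db - fec_gain_db   # ~0.9 dB, under 1 dB
```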

All these things are untested by the ham-community.

As said, let's try to walk before we run. Currently, we haven't even
started to crawl.


>> What the ham-community needs is a DV equivalent of the arduino
> Actually, I think they need a RF modulation equivalent of Arduino, not a DV equivalent. DV is one of the programs that you implement upon such a thing, but it is equally usable for data or maybe even radar (there's a really impressive global-range HF radar implementation on Hermes using chirp modulation). This would be a platform for implementing softmodems of various bandwidths and codecs of various types.

The term "arduino" was about providing a tool to allow people to *learn*
about something and gain experience *developing* things.

The key to the success of the arduino is its ease of use (as a
development platform) and the fact it is widely available to everybody.
SDR radios are not something that the ham-community has and -unless you
can find a way to start producing/selling thousands of them every
month- this is not going to change in the next couple of years.


If you create a protocol-stack that requires a new (x-hundred dollar)
handheld, then you have practically created another D-STAR. If you
create a protocol-stack that allows people to use and learn with the
equipment they have now, you have an RF equivalent of the arduino.


> Thanks

Steve

unread,
Jan 9, 2014, 8:07:26 AM1/9/14
to digita...@googlegroups.com
It seems to me, to start with something useful, it might be a GMSK soundcard modem.  Forget the upper layers, just export PCM sign+15 bits and mod/demod baseband audio.  After that is working, the rest is just data and protocol :-)

Forget the monolith, use building blocks. I think all OS's are multiprocessing/multithreading now.

Bruce Perens

unread,
Jan 9, 2014, 2:15:59 PM1/9/14
to digita...@googlegroups.com, Steve
Chris Testa is making good progress with Whitebox, which is a complete SDR transceiver, not just a modem.

Obviously I will test my theories on it. I hope to have a working system at Hamvention.

Steve <coupay...@gmail.com> wrote:
It seems to me, to start with something useful, it might be a GMSK soundcard modem.  Forget the upper layers, just export PCM sign+15 bits and mod/demod baseband audio.  After that is working, the rest is just data and protocol :-)

Forget the monolith, use building blocks. I think all OS's are multiprocessing/multithreading now.


Bill Vodall

unread,
Jan 9, 2014, 2:31:30 PM1/9/14
to digita...@googlegroups.com
> Chris Testa is making good progress with Whitebox,

Any idea when it will be generally available?

Bill

Bruce Perens

unread,
Jan 9, 2014, 3:19:21 PM1/9/14
to digita...@googlegroups.com, Bill Vodall
We might would probably an experimenters board in mid to late 2014 and a high-end HT in 2015. Incidentally, a lot of the AMBE patents expire in 2015, so having multi-platform interoperability gets easier.

Bruce Perens

unread,
Jan 9, 2014, 3:24:30 PM1/9/14
to digita...@googlegroups.com, Bill Vodall
Sorry about the typos. We would probably have an experimenters board in 2014 which is a QRP SDR transceiver. The full HT integrates an Android touchscreen platform, the same stuff as the experimenters board, amplifiers and filters. Both devices are meant to be open development platforms. They will come with sufficient functionality but are meant to have extensive community development.

Steve

unread,
Jan 9, 2014, 3:32:52 PM1/9/14
to digita...@googlegroups.com, Steve
I am glad to hear he is still working on it.  It sure went quiet on the blog/Facebook page last year.

I'm all in favor of IQ modulator/demodulators at VHF :-)
I think a neat interface board would be a two port USB (IQ in/IQ out) including PTT/COS to/from baseband audio.

jdow

unread,
Jan 9, 2014, 5:25:42 PM1/9/14
to digita...@googlegroups.com
For that matter, what is the status of the MELP and CELP patents? If they are
expired, having a collection of codecs to work with might be a good thing.

{^_^}

Bruce Perens

unread,
Jan 9, 2014, 5:59:56 PM1/9/14
to digita...@googlegroups.com, jdow
I don't know which patents cover those codecs. It's going to take a while just to make sense of the DVSI patents.

jdow

unread,
Jan 9, 2014, 7:10:52 PM1/9/14
to digita...@googlegroups.com
To answer my own question it appears basic CELP is long out of patent.
Many of the variants may also be out of patent. MELP appears to be a thing
of 1995 or so. So it's due to fade out in the next few years. And it
appears the patents surrounding MELP may be drawn on very narrow grounds.
MELP is still a minefield. It may remain so for another few years.

{^_^} Joanne

Kristoff Bonne

unread,
Jan 10, 2014, 2:53:39 AM1/10/14
to digita...@googlegroups.com
All,

On 09-01-14 08:45, Kristoff Bonne wrote:
Using a FEC has a number of advantages, one of them being that it provides information about bit-error rate and it makes the digital "cliff" between good-audio and bad-audio steeper.
I found an interesting reference in the Wikipedia article on T-DAB+: listener tests have indicated that people really preferred the steep cliff of T-DAB+ (which uses AAC) over T-DAB (which uses MPEG layer II). (I am currently on the train so I cannot provide the link.)

Any reasonable FEC should provide you with a gain of over 2.1 dB for a rate-1/2 FEC. So, considering the -3 dB loss for the twice-as-large bandwidth, the "cost" of an outer-layer FEC is less than 1 dB.
But, you then have to add this:
- the steeper digital cliff of using a FEC
- the additional information on BER
- the better performance of GMSK at a higher bitrate over normal (non-SDR) radios
- the fact that 99.9999% of the ham-community uses FM-transmitters for GMSK, and not the SDR-radio you mention
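The link-budget arithmetic above (a 2.1 dB coding gain against a 3 dB penalty for doubling the bandwidth; both figures are from the post, not independently verified) checks out in a couple of lines:

```python
import math

coding_gain_db = 2.1                       # claimed gain of a rate-1/2 FEC
bandwidth_penalty_db = 10 * math.log10(2)  # doubling the bit rate doubles the noise bandwidth
net_cost_db = bandwidth_penalty_db - coding_gain_db
print(round(net_cost_db, 2))               # → 0.91, i.e. "less than 1 dB"
```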


This is the complete article. It also contains some interesting information on multiple-layer FEC.


--- cut here --- cut here --- cut here ---- cut here ---

Error-correction coding

Error-correction coding (ECC) is an important technology for a digital communication system because it determines how robust the reception will be for a given signal strength - stronger ECC will provide more robust reception than a weaker form.

The old version of DAB uses punctured convolutional coding for its ECC. The coding scheme uses unequal error protection (UEP), which means that parts of the audio bit-stream that are more susceptible to errors causing audible disturbances are provided with more protection (i.e. a lower code rate) and vice versa. However, the UEP scheme used on DAB results in there being a grey area in between the user experiencing good reception quality and no reception at all, as opposed to the situation with most other wireless digital communication systems that have a sharp "digital cliff", where the signal rapidly becomes unusable if the signal strength drops below a certain threshold. When DAB listeners receive a signal in this intermediate strength area they experience a "burbling" sound which interrupts the playback of the audio.

The new DAB+ standard has incorporated

Reed-Solomon ECC as an "inner layer" of coding that is placed around the byte interleaved audio frame but inside the "outer layer" of convolutional coding used by the older DAB system, although on DAB+ the convolutional coding uses equal error protection (EEP) rather than UEP since each bit is equally important in DAB+. This combination of Reed-Solomon coding as the inner layer of coding, followed by an outer layer of convolutional coding - so-called "concatenated coding" - became a popular ECC scheme in the 1990s, and NASA adopted it for its deep-space missions. One slight difference between the concatenated coding used by the DAB+ system and that used on most other systems is that it uses a rectangular byte interleaver rather than Forney interleaving in order to provide a greater interleaver depth, which increases the distance over which error bursts will be spread out in the bit-stream, which in turn will allow the Reed-Solomon error decoder to correct a higher proportion of errors.

The ECC used on DAB+ is far stronger than is used on DAB, which, with all else being equal (i.e. if the transmission powers remained the same), would translate into people who currently experience reception difficulties on DAB receiving a much more robust signal with DAB+ transmissions. It also has a far steeper "digital cliff", and listening tests have shown that people prefer this when the signal strength is low compared to the shallower digital cliff on DAB.[10]

--- cut here --- cut here --- cut here ---- cut here ---
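The rectangular byte interleaver the quoted article mentions is easy to sketch: write bytes row-by-row into a matrix, read them out column-by-column, so a burst of consecutive channel errors is spread over several codewords. The 4x6 dimensions here are arbitrary:

```python
def interleave(data, rows, cols):
    """Write row-by-row, read column-by-column (rectangular interleaver)."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Inverse: write column-by-column, read row-by-row."""
    assert len(data) == rows * cols
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = data[i]
            i += 1
    return out

block = list(range(24))          # 4 rows x 6 cols of "bytes"
tx = interleave(block, 4, 6)
rx = tx[:]
for i in range(8, 12):           # a burst of 4 consecutive channel errors...
    rx[i] = -1
# ...ends up spread across 4 different rows (codewords) after deinterleaving
damaged_rows = {i // 6 for i, b in enumerate(deinterleave(rx, 4, 6)) if b == -1}
print(sorted(damaged_rows))      # → [0, 1, 2, 3]
```

This is why a deeper interleaver lets the Reed-Solomon decoder correct a higher proportion of burst errors.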



Considering the low bitrate needed by codec2 voice encoding and the much higher bitrate that can be provided by GMSK on VHF/UHF, this looks like an interesting field to investigate.

As this does not require GMSK at low-low bitrates, it could make the use of SDR radios irrelevant, which much better matches the equipment ham-radio operators have these days.

73
kristoff - ON1ARF

Adrian Musceac

unread,
Jan 13, 2014, 5:27:11 AM1/13/14
to digita...@googlegroups.com
Hi all,
How would you go about sending the bitstream over UDP? Would you split it into
datagrams and reassemble it at the receiver?
Would it be better to simply send the audio, uncompressed? I suppose a voice
codec would totally mess the GMSK symbols right? How about a lossless codec?

(I'm working on a RoIP application, and I'd like to make it compatible with
digital voice.)

Cheers,
Adrian, YO8RZZ

Stuart Longland (VK4MSL)

unread,
Jan 13, 2014, 5:46:42 AM1/13/14
to digita...@googlegroups.com
On 13/01/14 20:27, Adrian Musceac wrote:
> On Thursday, January 09, 2014 15:07:26 Steve wrote:
>> It seems to me, to start with something useful, it might be a GMSK
>> soundcard modem. Forget the upper layers, just export a PCM Sign+15 bits
>> and mod/demod a baseband audio. After that is working, the rest is just
>> data and protocol :-)
>>
>> Forget the monolith, use building blocks. I think all OS's are now
>> multiprocessing/multithreading now.
>
> Hi all,
> How would you go about sending the bitstream over UDP? Would you split it into
> datagrams and reassemble it at the receiver?

You wouldn't, not with Codec 2 unless latency was not of any concern.

Codec 2 is so heavily compressed, unless you ran at 3200bps and buffered
250msec of audio per packet, you'd send more header information in your
packets than data.
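Stuart's overhead point is easy to quantify. Assuming plain IPv4 + UDP (28 header bytes) and Codec 2 at 3200 bit/s, i.e. 8 bytes per 20 ms frame:

```python
HEADER = 20 + 8                          # IPv4 + UDP header bytes
FRAME_BYTES = 3200 // 8 * 20 // 1000     # Codec 2 @ 3200 bit/s, 20 ms frames -> 8 bytes

def overhead(frames_per_packet):
    """Fraction of each packet that is header rather than voice data."""
    payload = frames_per_packet * FRAME_BYTES
    return HEADER / (HEADER + payload)

print(round(overhead(1), 2))    # one 20 ms frame per packet → 0.78 (78% header!)
print(round(overhead(12), 2))   # ~240 ms buffered per packet → 0.23
```

So a single-frame packet is mostly header, and only by buffering a quarter second or so does the payload dominate, at the cost of latency.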

The only applications I see for Codec 2 over TCP/IP are trunking and
voice mail.

Regards,
--
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
...it's backed up on a tape somewhere.

Kristoff Bonne

unread,
Jan 13, 2014, 5:50:49 PM1/13/14
to digita...@googlegroups.com
Hi all,



On 13-01-14 11:46, Stuart Longland (VK4MSL) wrote:
> Codec 2 is so heavily compressed, unless you ran at 3200bps and
> buffered 250msec of audio per packet, you'd send more header
> information in your packets than data. The only applications I see for
> Codec 2 over TCP/IP are trunking and voice mail. Regards,

Has anybody tried running codec2 voice over this?
http://dx.com/p/rf4432se-si4432-433mhz-wireless-rf-transceiver-module-155701

Looks like a nice device to try to create a very low-power hotspot for
around the house.


This device is nice as it can tune from 430 to 439 MHz! Ideal for
ham-radio applications.



73
kristoff - ON1ARF

Steve

unread,
Jan 13, 2014, 10:46:10 PM1/13/14
to digita...@googlegroups.com
One thing to keep in mind about the Internet is that you have a lot of bandwidth, and if you are exchanging phone-quality voice, then you want to use a codec that has good fidelity.  On the other hand, if you are sending digital voice over a dial-up modem, then yes, you will probably accept a lower-fidelity voice.

Typically you would add a data structure header and send a fixed number of codec packets per UDP frame.  The data structure contains, say, the version number, maybe a sequence number, then the codec bytes.  A good number is, say, 33 codec data frames of 8 bytes each (GSM).

The reason GSM is often used, is the source is available, and is 13 kbps, or 1/5 the telephone quality bandwidth.  Sending 64kbps (both ways), even over the Internet, is a bit much, and your ISP will probably want to bill you at the commercial rate.

You might download the old echolinux archive on sourceforge.  It uses GSM as the codec, but could be converted easily to codec2.
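The packet layout Steve describes might look like the following. The field sizes and names are guesses for illustration, not any existing protocol; note that a GSM 06.10 frame is actually 33 bytes, while a Codec 2 frame at 3200 bit/s is 8 bytes, which is what this sketch assumes:

```python
import struct

VERSION = 1
FRAME_BYTES = 8                  # one Codec 2 frame at 3200 bit/s (illustrative choice)

def pack_packet(seq, frames):
    """Header: version (1 byte), frame count (1 byte), sequence number (2 bytes)."""
    header = struct.pack("!BBH", VERSION, len(frames), seq & 0xFFFF)
    return header + b"".join(frames)

def unpack_packet(data):
    version, count, seq = struct.unpack("!BBH", data[:4])
    body = data[4:]
    frames = [body[i * FRAME_BYTES:(i + 1) * FRAME_BYTES] for i in range(count)]
    return version, seq, frames

frames = [bytes([i] * FRAME_BYTES) for i in range(4)]   # four dummy codec frames
pkt = pack_packet(seq=42, frames=frames)
print(len(pkt))                  # 4-byte header + 4*8 bytes payload → 36
assert unpack_packet(pkt) == (VERSION, 42, frames)
```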

Karel Fassotte

unread,
Jan 14, 2014, 12:58:54 AM1/14/14
to digita...@googlegroups.com
Hello, this is not directly related to what you're writing about, but I would like to know if the modem is available for data only.
Where can I get the source code?
I want to experiment with the modem at several speeds on shortwave. I know it's not what most amateurs want, but I just want to try. I have a serial tone modem, but that won't do more than 2400 bps, although the modem should permit speeds up to 8 kbps without compression.
Greetings
Karel Fassotte
HC1AKP/PE2KFA



2014/1/13 Kristoff Bonne <kris...@skypro.be>
--
You received this message because you are subscribed to the Google Groups "digitalvoice" group.
To unsubscribe from this group and stop receiving emails from it, send an email to digitalvoice+unsubscribe@googlegroups.com.

Tony Langdon

unread,
Jan 14, 2014, 5:01:42 AM1/14/14
to digita...@googlegroups.com
On 14/01/2014 2:46 PM, Steve wrote:
> You might download the old echolinux archive on sourceforge. It uses
> GSM as the codec, but could be converted easily to codec2.
That's pretty much obsolete code. Why not use the thelinkbox code as
something more current? Or SVXlink?

--
73 de Tony VK3JED/VK3IRL
http://vkradio.com

Reuven Z Gevaryahu

unread,
Jan 14, 2014, 10:02:19 AM1/14/14
to digita...@googlegroups.com
Also consider Speak Freely. Before the days of Skype and whatnot (the mid to late 90s) that was the semi-standard internet voice application. It had a number of codecs included, and was "abandoned" to sourceforge (http://sourceforge.net/projects/speak-freely/) about 10 years ago (see http://www.fourmilab.ch/netfone/windows/speak_freely.html). Somebody posted a patch that added speex support (http://www.2pi.info/software/sf_speex/), but little has been done since then. It could use some NAT-traversal help; I think the methods to do that have been significantly improved since 10 years ago. 

--Reuven (KB3EHW)

Steve

unread,
Jan 14, 2014, 3:00:17 PM1/14/14
to digita...@googlegroups.com
It was just a suggestion. There's a lot of example code out there I would think. I figured anything after echolinux is probably going to have a lot more add-ons, and then it's hard to find the beef.

Speak Freely is probably not a good choice, as it was a DOS program, if I recall right, sans multi-tasking and multi-threading.

James Hall

unread,
Jan 14, 2014, 3:17:11 PM1/14/14
to digita...@googlegroups.com
16 bit Windows actually



Adrian Musceac

unread,
Jan 14, 2014, 3:25:56 PM1/14/14
to digita...@googlegroups.com
Hello everybody,
Thanks for your suggestions. I need to expand a bit on my question.
The codebase I'm working on allows RoIP using either Opus (http://www.opus-codec.org/) or Codec2 for very slow links.
Since I have control over both the server and the client, I can pretty much
transmit any information over the TCP/IP link, with or without internet
access, since the server is agnostic about the type of data it is forwarding.

Now, let's say we have a digital transmission using Codec2 as a vocoder.
Since it allows for a header to be sent with the packets, once demodulated by
the GMSK modem, the stream could be sent to a custom destination. A bit like
DMR talkgroups, if you want. No need to decode Codec2 and re-encode to another
format. The digital voice packets can get on the air at the other end just
like they were transmitted, by applying the GMSK modulation.
How does this sound?
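A forwarding core that routes already-encoded voice frames by destination, without ever decoding Codec 2 (transcoding only happening at an analog endpoint), could be as simple as the following sketch. All names and the talkgroup-style destination id are illustrative only:

```python
# Hypothetical reflector core: forward opaque, already-encoded codec frames
# by destination id; the payload is never decoded or re-encoded here.
subscribers = {}     # destination id -> set of client addresses

def subscribe(dest, addr):
    subscribers.setdefault(dest, set()).add(addr)

def route(dest, frame, sender):
    """Return the addresses the opaque codec frame should be forwarded to."""
    return [a for a in subscribers.get(dest, ()) if a != sender]

subscribe(9, "client-a")     # talkgroup-style destination, as suggested above
subscribe(9, "client-b")
subscribe(9, "client-c")
print(sorted(route(9, b"\x00" * 8, sender="client-a")))   # → ['client-b', 'client-c']
```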

One option would be to have endpoints which are analog. At these specific
endpoints, the Codec2 stream would be reassembled, decoded into classic PCM
audio and sent over FM, for example.
All it needs is a custom header specifying the kind of treatment the voice
packet should get at the endpoint, and perhaps a specific endpoint as a
destination.
This way, by setting a destination on the transceiver side, only one network
station would forward the packets to the radio interface.

I hope I provided enough details, but I'll be back!

Cheers,
Adrian, YO8RZZ

Adrian Musceac

unread,
Jan 14, 2014, 3:34:52 PM1/14/14
to digita...@googlegroups.com
On Tuesday, January 14, 2014 17:02:19 Reuven Z Gevaryahu wrote:
> Also consider Speak Freely. Before the days of Skype and whatnot (the mid
> to late 90s) that was the semi-standard internet voice application. It had
> a number of codecs included, and was "abandoned" to sourceforge
> (http://sourceforge.net/projects/speak-freely/) about 10 years ago (see
> http://www.fourmilab.ch/netfone/windows/speak_freely.html). Somebody posted
> a patch that added speex support (http://www.2pi.info/software/sf_speex/),
> but little has been done since then. It could use some NAT-traversal help;
> I think the methods to do that have been significantly improved since 10
> years ago.
>
> --Reuven (KB3EHW)
>

Hi,

You mentioned NAT traversal. I had to face the same problem here. Consider 3
clients for a RoIP network, sending back and forth voice packets as a Codec2
stream.
If two of the clients are behind the same NAT device, or in some sort of VPN,
the issue becomes more evident. I think Echolink has the same problem.
The only way to avoid this that I could code easily was to always use an
external server. I will probably go in the future for a complete peer to peer
solution, but this requires a lot of months of studying the most recent
protocols.
For now I settled on the client/server variant, where the server is just
acting as a reflector for all clients connected. No direct peer to peer
connection is made. NAT traversal is not a problem anymore, but the downside
is that the server requires (n + n²) × bandwidth, where n is the number of
clients talking at the same time.
This is why, besides the high quality Opus codec, I added support for Codec2,
which incidentally makes it easier to act as a digital radio endpoint.
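The (n + n²) × bandwidth figure follows from the server receiving one stream per talker and repeating each stream out to every connected client. A quick check, taking n as both the number of talkers and the number of clients, per the formula above (the 24 kbit/s Opus rate is just an illustrative value):

```python
def reflector_bandwidth(n, stream_kbps):
    """Total server bandwidth when n clients talk at once and each
    incoming stream is repeated out to all n clients: n in + n*n out."""
    inbound = n * stream_kbps
    outbound = n * n * stream_kbps
    return inbound + outbound            # = (n + n**2) * stream_kbps

# Codec 2 at 3.2 kbit/s vs Opus at, say, 24 kbit/s, with 5 simultaneous talkers
print(reflector_bandwidth(5, 3.2))       # → 96.0 kbit/s
print(reflector_bandwidth(5, 24.0))      # → 720.0 kbit/s
```

The quadratic term is why a low-rate codec like Codec 2 matters so much on the reflector side.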

Cheers,
Adrian, YO8RZZ

Tony Langdon

unread,
Jan 14, 2014, 3:35:41 PM1/14/14
to digita...@googlegroups.com
On 15/01/2014 2:02 AM, Reuven Z Gevaryahu wrote:
> Also consider Speak Freely. Before the days of Skype and whatnot (the
> mid to late 90s) that was the semi-standard internet voice
> application. It had a number of codecs included, and was
> "abandoned" to sourceforge
> (http://sourceforge.net/projects/speak-freely/) about 10 years ago
> (see http://www.fourmilab.ch/netfone/windows/speak_freely.html).
> Somebody posted a patch that added speex support
> (http://www.2pi.info/software/sf_speex/), but little has been done
> since then. It could use some NAT-traversal help; I think the methods
> to do that have been significantly improved since 10 years ago.
Well, much of the functionality of Speak Freely that's relevant to ham
radio is already in thelinkbox - the Speak Freely transport is used by
IRLP, so thelinkbox supports that for compatibility. IRLP actually uses
a highly modified version of Speak Freely for Unix.

Tony Langdon

unread,
Jan 14, 2014, 3:39:04 PM1/14/14
to digita...@googlegroups.com
On 15/01/2014 7:00 AM, Steve wrote:
> It was just a suggestion. There's a lot of example code out there I
> would think. I figured anything after echolinux is probably going to
> have a lot more add-ons, and then it's hard to find the beef.
>
> SpeakFreely is probably not a good choice, as it was a DOS program, if
> I recall right, sans multi-tasking and multi-threading
Speak Freely was released for Windows and Unix (runs quite well on
Linux). Much of our existing ham VoIP technology is a derivative of
Speak Freely to some degree. IRLP is the closest derivative, using a
modified version of Speak Freely for Unix (and using the native Speak
Freely transport). Echolink possibly also owes some of its heritage
to Speak Freely, though it's not as closely derived.

Tony Langdon

unread,
Jan 14, 2014, 3:40:27 PM1/14/14
to digita...@googlegroups.com
On 15/01/2014 7:17 AM, James Hall wrote:
> 16 bit Windows actually
And later released for 32 bit Windows - I'm running it on a 64 bit
Windows system (where 16 bit programs won't run).