
FPGA for large HDMI switch

837 views

David Brown
Apr 2, 2013, 11:27:07 AM
I am working on a project that will involve a large HDMI switch - up to
16 inputs and 16 outputs. We haven't yet decided on the architecture,
but one possibility is to use one or more FPGAs. The FPGAs won't be
doing much other than the switch - there is no video processing going on.

Each HDMI channel will be up to 3.4 Gbps (for HDMI 1.4), with 4 TMDS
pairs (3 data and 1 clock). That means 64 pairs in, and 64 pairs out,
all at 3.4 Gbps.
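For scale, the link count and aggregate data bandwidth implied by those figures can be tallied in a few lines (a sketch only; 3.4 Gbps is the per-pair TMDS rate quoted above, and only the 3 data pairs per port carry payload):

```python
# Back-of-the-envelope tally of the switch's link count and raw bandwidth.
GBPS_PER_PAIR = 3.4          # max TMDS lane rate for HDMI 1.4
PAIRS_PER_PORT = 4           # 3 data pairs + 1 clock pair
PORTS_IN = PORTS_OUT = 16

pairs_in = PORTS_IN * PAIRS_PER_PORT     # 64 differential pairs in
pairs_out = PORTS_OUT * PAIRS_PER_PORT   # 64 differential pairs out

# Aggregate payload bandwidth over the data pairs alone.
data_gbps_in = PORTS_IN * 3 * GBPS_PER_PAIR

print(pairs_in, pairs_out, data_gbps_in)  # 64 64 163.2
```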


Does anyone know of any FPGA families that might be suitable here?

I've had a little look at Altera (since I've used Altera devices
before), but their low-cost transceivers are at 3.125 Gbps - this means
we'd have to use their mid or high cost devices, and they don't have
nearly enough channels. I don't expect the card to be particularly
cheap, but I'd like to avoid the cost of multiple top-range FPGA devices
- then it would be much cheaper just to have a card with 80 4-to-1 HDMI
mux chips.

Thanks for any pointers,

David

Matt L
Apr 4, 2013, 3:08:47 PM
You cannot do what you desire in an FPGA, even if one existed with 64 high-speed SerDes at sufficient speed and cost. What you seek is a serial crosspoint switch. Look at vendors like Mindspeed.

David Brown
Apr 4, 2013, 5:31:59 PM
Thanks for that hint. I got another reply suggesting a crosspoint
switch - I will look at Mindspeed too now.

mvh.,

David

jone...@comcast.net
Apr 8, 2013, 9:13:41 AM

Matt, can you elaborate on why the OP cannot do this in an FPGA, if a suitable FPGA is available & cost-effective?

I completely understand that it may be highly unlikely that it can be done in a cost-effective FPGA, but you excluded that as a reason in your reply.

Andy

thomas....@gmail.com
Apr 8, 2013, 11:58:34 AM
You might consider using 16 external receivers and 16 external transmitters, with the FPGA muxing the data buses. Some Rx/Tx chips support DDR on the data bus, which gets you down to 16 pins per Rx/Tx (12 data + HD + VD + DE + Clk), so 16 x 32 = 512 pins total. There are low-cost Cyclone IV devices (CE30/CE40) with at least that many I/Os.

But I have not checked whether these DDR-style Rx/Tx chips are also available for HDMI 1.4, or how this solution compares to the crosspoint switches.

Regards,

Thomas

David Brown
Apr 9, 2013, 5:02:08 AM
Unfortunately, the numbers are bigger than that. The HDMI receivers and
transmitters I have seen have SDR on the data bus, and for HDMI 1.4
that would be 36 lines at 340 Mbps. So for 16 channels in and 16
channels out, that would be 36*16*2 = 1152 pins, all running at 340
Mbps. That's a lot of pins - and even if we found an FPGA big enough,
designing such a board and length-matching all the lines that need it
would be a serious effort.
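The pin arithmetic from this post and Thomas's can be checked side by side (both figures come from the thread; the chip counts are 16 per direction):

```python
# Pin-count check for the external Rx/Tx approach: 16 receivers + 16 transmitters.
CHIPS = 16 + 16                 # one chip per channel, each direction

sdr_pins_per_chip = 36          # SDR parallel bus at 340 Mbps, as above
ddr_pins_per_chip = 16          # 12-bit DDR bus + HD + VD + DE + Clk (Thomas's figure)

sdr_total = CHIPS * sdr_pins_per_chip   # 1152 pins
ddr_total = CHIPS * ddr_pins_per_chip   # 512 pins
print(sdr_total, ddr_total)  # 1152 512
```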

The crosspoint switches mentioned by another poster are one likely
choice. The other realistic architecture is to use large numbers of
4-to-1 HDMI multiplexers.

matt....@gmail.com
Apr 19, 2013, 6:15:03 PM
Andy,

There are two approaches to doing this in an FPGA. The OP is looking at one that would bring the TMDS and clock lines directly to the FPGA (assuming appropriate equalization / level shift / drivers on PCB). An FPGA cannot provide a simple crosspoint function internally, thus one would have to put 16 instances of HDMI RX and 16 instances of HDMI TX cores in the device and create the crosspoint in the fabric. My personal opinion is that the number of cores, the clocking resources, and logic required would make this a futile exercise.
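The crosspoint function itself is conceptually simple - each output is a mux over all decoded inputs - and can be sketched behaviourally (illustrative only; the real cost Matt describes is in the 32 HDMI cores and clocking around it, not this logic):

```python
def crosspoint(inputs, select):
    """Non-blocking crossbar: output o carries inputs[select[o]].

    Any output may pick any input, and several outputs may share one input
    (useful for fan-out of a single HDMI source to many sinks).
    """
    return [inputs[s] for s in select]

# 4 inputs / 4 outputs: outputs 0 and 1 both take input 2, etc.
routed = crosspoint(["a", "b", "c", "d"], [2, 2, 1, 0])
print(routed)  # ['c', 'c', 'b', 'a']
```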

The second approach is mentioned by Thomas already. This at least keeps the HDMI PHY and RX/TX stack out of the FPGA, but will require quite a bit of IO depending on OP needs.

Either attempt will require a large and costly FPGA; I think the OP will find the crosspoints cheaper in the end.

-Matt

David Brown
Apr 21, 2013, 8:18:50 AM
Yes, that's pretty much the same conclusion as we came to (after you and
another off-list poster suggested crosspoints) - and we are now using
such a crosspoint switch on the board.

Thanks to all for their suggestions.

David

jone...@comcast.net
Apr 22, 2013, 2:27:26 PM
Matt,

OK, it's an economic issue, not a technical issue.

Thanks,

Andy

David Brown
Apr 22, 2013, 2:46:28 PM
I think it's a bit of both - when looking at the number of I/Os
needed, I don't think there are FPGAs big enough on the market. Had it
been 8x8 rather than 16x16, it would perhaps have been an economic
issue. But with 16x16, we would need 64 inputs at 3.4 Gbps and 64
outputs at 3.4 Gbps - I don't think there are any FPGAs that have that
many high-speed channels. And if we use external encoder/decoder chips,
the speeds per line are lower but we would need far more of them.

Certainly in principle an FPGA can be used for an HDMI cross-point
switch, but it seems that it is not a practical solution for such a big
switch.

jone...@comcast.net
Apr 22, 2013, 7:36:33 PM
Matt,

This is not a question of practical/economic consideration, per your original statement.

Altera Stratix V GX B series has 66 full-duplex, 14.1 Gbps transceivers with independent Rx/Tx PLLs (e.g. 66 inputs, 66 outputs), and 490K-952K logic elements for an x-bar.

Probably not cost effective, but technically feasible.

Andy

David Brown
Apr 23, 2013, 4:48:19 AM
On 23/04/13 01:36, jone...@comcast.net wrote:
> Matt,
>
> This is not a question of practical/economic consideration, per your
> original statement.
>
> Altera Stratix V GX B series has 66 full-duplex, 14.1 Gbps
> transceivers with independent Rx/Tx PLLs (e.g. 66 inputs, 66
> outputs), and 490K-952K logic elements for an x-bar.

I didn't realise the Rx and Tx sides of the transceivers could operate
independently - that's why I dismissed these as too small.

>
> Probably not cost effective, but technically feasible.

Well, if it is possible to buy these devices, then I agree.

>
> Andy
>

jone...@comcast.net
Apr 23, 2013, 9:29:12 AM
Arrow shows 5SGXEB6R2F40C3N in stock @ $9,092.00 ea (min/multiple = 1).

Very likely not cost effective...

Andy

Morten Leikvoll
Apr 23, 2013, 12:52:14 PM
Not an FPGA solution, but have a look at Analog Devices' ADN4605 and its
like. A few of those and you get a full matrix of even more ports.

GaborSzakacs
Apr 23, 2013, 3:23:51 PM
Probably not a good solution at 340 Mbps, but when you have a large
parallel bus and need a number of these in a crossbar, you can split
the bus into bit slices and handle them in separate, smaller, and
much cheaper devices. Generally, using a very high pin-count FPGA
with very little logic is a big waste of silicon. For something as
regular in structure as a parallel-bus crossbar, splitting the bus
into slices can reduce that waste by using a number of FPGAs
programmed identically, each handling the same slice from every port
on the crossbar. The problem at very high speeds would be part-to-part
skew. You can control voltage and temperature among the parts,
but you're at the mercy of the manufacturer for process variation.
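The slicing arithmetic can be sketched for the numbers in this thread (the 4-way slice factor is an assumption for illustration):

```python
# Bit-slice a 36-bit-per-port parallel crossbar across identical devices.
BUS_WIDTH = 36       # parallel bus width per port (figure from the thread)
PORTS = 16 + 16      # 16 inputs + 16 outputs
SLICES = 4           # identical slice devices, all loaded with the same bitstream

bits_per_slice = BUS_WIDTH // SLICES       # each device sees 9 bits of every port
pins_per_device = bits_per_slice * PORTS   # 288 data pins per device, vs 1152 in one chip
print(bits_per_slice, pins_per_device)  # 9 288
```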

--
Gabor

Petter Gustad
Apr 24, 2013, 2:37:39 PM
David Brown <david...@removethis.hesbynett.no> writes:

> needed, I don't think there are FPGA's big enough on the market. Had

It's possible to build a Clos-style crossbar out of smaller FPGAs,
but you "waste" a lot of SerDes links on switch expansion. E.g. in
the figure below, each switch element could be a 4x4 FPGA,
interconnected to form an 8x8 switch:

http://upload.wikimedia.org/wikipedia/commons/c/c9/Benesnetwork.png
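A rough count of the hardware and the "wasted" inter-stage links for one such arrangement (a sketch assuming a simple three-stage layout of 4x4 elements, two per stage; other stagings give different numbers):

```python
# Element and link count for a three-stage 8x8 switch built from 4x4 elements.
K = 4                      # element port count (4 in, 4 out), e.g. a small FPGA
N = 8                      # target switch size

elements_per_stage = N // K            # 2 elements per stage
elements = 3 * elements_per_stage      # 6 small switches in total
expansion_links = 2 * N                # inter-stage SerDes links spent on expansion
print(elements, expansion_links)  # 6 16
```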

//Petter

--
.sig removed by request.