“Let our rigorous testing and reviews be your guidelines to A/V equipment – not marketing slogans”

Alternatives for AV Home Networking

by August 16, 2009

The ongoing convergence of AV and computing is inevitable, rooted in the dawn of digital media with the advent of the CD and nurtured by the Internet.  Media servers, multimedia gaming consoles, HTPCs, networked AV receivers, MP3 player docks, IPTV: digital entertainment is becoming as at home on computers as it is on traditional AV gear.  However, all of this cross-pollination often leaves entertainment stored in disparate locations, so a reliable connection is required to transfer files between devices.

There are two basic ways to network devices together: wired and wireless.  Home networking has come to be dominated by Wi-Fi for its apparent ease of installation.  Connections are mobile, with no tethering to the wall, and there is no need to pull specialized wiring through unseen and difficult-to-reach places in the home; just plug a transceiver into each device that needs a connection and you have an instant network.

Wi-Fi is great for web surfing on a randomly located laptop, but it does have its drawbacks and it may not be the best choice for a home multimedia network with heavy streaming duties.

For those who find the shortcomings of Wi-Fi make it less than an ideal solution, there are other methods that make use of existing home wiring systems and are not as onerous as pulling Ethernet cable through walls and attics.  These methods also provide connections that are more secure, with better data throughput and reliability.  There is a reason that mission-critical business systems and servers are primarily hardwired and not connected by Wi-Fi.

After experimenting with and using a number of these alternatives, dating back to 2003, I would like to share what I know for the benefit of Audioholics readers who would like to gain a bit of networking know-how.

Networking and Bit Rates

The discussion must start with what all the numbers mean: defining them precisely ensures that we compare like bit-rate numbers from one networking standard to another.  It is a bit of a mess, and marketing, with the goal of painting a given product's performance in the best possible light, only confuses the matter.

Quoted bit rates are not all equal; they come from different points in the transmission link, and without proper context a comparison between them is apples to oranges.  Ultimately, all that matters to the end user is the data rate they will actually get from a device.  However, most networking products do not advertise this number, for two reasons: it varies widely with the particulars of a given network, and it can be significantly lower than the idealized numbers from intermediate steps of the transmission chain.

We will start with the fastest data rates and work towards what will ultimately be seen in real world usage.

At the top is the gross bit rate, the maximum number of bits that can be sent over the physical layer of a network (the raw bandwidth).  It includes everything transmitted: usable data, signaling protocol, and error-correction overhead.  Quoting this number to consumers gives the greatest overstatement of actual data transfer speed.

The net bit rate is the amount of useful data that can be sent absent error correction coding.  The ratio of useful information to error correction overhead is described as the code rate such that:

Gross Bit Rate · Code Rate ≥ Net Bit Rate
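As a concrete check of that relationship, take the terrestrial ATSC figures that appear in the bit-rate table later in this article (32.00 Mbits/s gross, 19.39 Mbits/s of payload); a few lines of Python back out the implied code rate:

```python
# Net bit rate is the gross bit rate scaled by the code rate
# (the fraction of transmitted bits that carry useful data).
def net_bit_rate(gross_mbits, code_rate):
    return gross_mbits * code_rate

# Terrestrial ATSC figures from the table later in this article
atsc_gross = 32.00   # Mbit/s over a 6 MHz channel
atsc_net = 19.39     # Mbit/s of actual A/V payload

implied_code_rate = atsc_net / atsc_gross
print(f"Implied code rate: {implied_code_rate:.2f}")   # ~0.61
print(f"Net rate check: {net_bit_rate(atsc_gross, implied_code_rate):.2f} Mbit/s")
```

In other words, roughly 40% of what ATSC transmits is signaling and error-correction overhead, not picture or sound.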

Throughput is the average amount of successful data delivery through a communication channel and is the net bit rate less data retransmission.  Detection of data errors between gross and net bit rate requires retransmission, which while not reducing the net bit rate per se, does increase the amount of time to get a complete transmission through a network.  Throughput can be viewed under several different circumstances:

  • Maximum theoretical throughput: maximum data transmission under ideal circumstances, correlates with gross bit rate
  • Maximum achievable throughput: ideal maximum with network protocol considered, correlates with net bit rate
  • Peak measured throughput: maximum on realistic system over a short time period, applicable to systems under variable loading
  • Maximum sustained throughput: maximum on realistic system over a longer transmission time, applicable to systems under more continuous loading

Net bit rates and maximum theoretical throughput are the numbers that are typically quoted in product marketing.  The idealized circumstances rarely occur in the real world and actual network performance will always be below these values.  Interference, network congestion, and many other factors will affect the final perceived transmission rate.

Goodput is the user-perceived data rate that finally comes out of the system: the effective throughput.  Goodput is the user payload after all the vehicles used to transport the data are removed, and it fits the following relationship:

Net Bit Rate ≥ Maximum Throughput ≥ Throughput ≥ Goodput

Goodput is the most volatile of these numbers, subject to additional factors including operating system protocols and the processing performance of the particular hardware in a network.  Goodput can be empirically measured by timing file transfers of known size:

Goodput (bits/s) = (File Size (Bytes) * 8) / Transfer Time (seconds)
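The formula drops straight into code.  A minimal sketch (the file size and transfer time below are made-up example numbers, not measurements):

```python
def goodput_mbits(file_size_bytes, transfer_seconds):
    """Effective throughput in Mbit/s: payload bits delivered per second."""
    return file_size_bytes * 8 / transfer_seconds / 1e6

# Hypothetical example: a 700 MB file copied across the network in 120 seconds
print(f"{goodput_mbits(700e6, 120):.1f} Mbit/s")  # ~46.7 Mbit/s
```

Timing a large, real file copy and plugging the numbers in is the simplest honest benchmark of what a network actually delivers.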

For an application like streamed multimedia, network goodput is the number that must be satisfied to avoid interruption, which might lead to undesirable behaviors like video frame rate stuttering.

Many modern network systems have Quality of Service protocols built in to prioritize traffic and minimize transmission problems for lag sensitive applications.  Services like IPTV make use of such systems to provide uninterrupted video streams.  QoS can prioritize video traffic, but for someone looking to stream personal video files through a home network, the network still must have the capacity to successfully transmit enough data to satisfy the demand placed upon it by the video plus any other likely, simultaneous uses.

The jargon used across various sources of real-world performance numbers for networking gear (marketing, product reviews, and credible scientific testing) is inconsistent.  Terms like "typical throughput" most closely correspond to goodput, as described above, but vary with the specifics of each network.  Ultimately, most product testing is scientifically informal: reviewers are not measuring internal data rates, they are simply measuring what comes out at the end, what a user might actually see.  This is not necessarily the same number one will get after installing the gear at home, but it should be similar in all but the worst of circumstances.

Home Theater Bandwidth and Network Sizing

Next, we should discuss what kind of bandwidth is required to shunt A/V signals around a network for home theater purposes.  Obviously there are varying levels of quality; the bandwidth needed to stream A/V content depends on native resolution and the amount of compression, as well as software specifics such as file types and codecs.

To give the reader an idea of the amount of bandwidth required by high definition media, we will look at various methods by which video can currently be delivered:

Various A/V Bit Rates

| Digital Source   | Broadcast Modulation | Broadcast Bandwidth (MHz) | Codec       | Maximum Resolution | Frame Rates (Hz) | Gross Bitrate (Mbits/s) | A/V Bitrate (Mbits/s) |
|------------------|----------------------|---------------------------|-------------|--------------------|------------------|-------------------------|-----------------------|
| Terrestrial ATSC | 8VSB                 | 6                         | MPEG-2      | 1080i/720p         | 30p/60i          | 32.00                   | 19.39                 |
| Digital Cable    | 256-QAM              | 6                         | MPEG-2      | 1080i/720p         | 30p/60i          | 64.00                   | 38.78                 |
| DVD              | -                    | -                         | MPEG-2      | 480i               | 60i              | 11.08                   | 10.08                 |
| HD-DVD           | -                    | -                         | MPEG-4/VC-1 | 1080p              | 30p/60i          | 36.55                   | 30.24                 |
| Blu-ray          | -                    | -                         | MPEG-4/VC-1 | 1080p              | 30p/60i          | 53.95                   | 48.00                 |
| Flash Video (LQ) | -                    | -                         | MPEG-4      | 240p               | 30p              | 0.34                    | 0.29                  |
| Flash Video (HQ) | -                    | -                         | MPEG-4      | 1080p              | 30p              | 22.08                   | 18.40                 |

From low-quality Flash video to high-end sources, higher resolution and lighter compression quickly increase the required bandwidth.  Broadcast HDTV needs a minimum of 20 Mbits/s per stream.  Any video approaching the current gold standard for quality, Blu-ray, will require 48 Mbits/s for the A/V signal alone, in addition to the necessary signal protocol overhead.  These numbers represent the goodput required to stream through a networked A/V system in real time without problems.

One complication is the common practice among cable television and other paid, non-terrestrial service providers of squeezing additional subchannels into each 6 MHz ATSC channel width, remapped to virtual channels in a manner transparent to subscribers.

Common Digital Subchannel Schemes

| HDTV Channels          | Bitrate (Mbits/s) | Subchannels | Subchannel Type             | Bitrate (Mbits/s) |
|------------------------|-------------------|-------------|-----------------------------|-------------------|
| 1 x 1080i or 720p HDTV | 19                | 0           | No additional subchannels   | 0                 |
| 1 x 1080i or 720p HDTV | 15                | 1           | 480p or 480i SD subchannel  | 3.8               |
| 1 x 1080i or 720p HDTV | 11                | 1           | 720p HDTV subchannel        | 8.0               |
| 1 x 1080i or 720p HDTV | 11                | 2           | 480p or 480i SD subchannels | 3.8               |
| 1 x 720p HDTV channel  | 8.0               | 3           | 480p or 480i SD subchannels | 3.8               |
| 2 x 720p HDTV channels | 9.6               | 0           | No SD subchannels           | 0                 |
| 2 x 720p HDTV channels | 7.8               | 1           | 480p or 480i SD subchannel  | 3.8               |
| No HDTV channels       | 0                 | 2           | 480p or 480i SD subchannels | 6.0               |
| No HDTV channels       | 0                 | 3           | 480p or 480i SD subchannels | 6.0               |
| No HDTV channels       | 0                 | 4           | 480p or 480i SD subchannels | 4.2               |
| No HDTV channels       | 0                 | 5           | 480p or 480i SD subchannels | 3.8               |
| No HDTV channels       | 0                 | 6           | 480p or 480i SD subchannels | 3.1               |
| No HDTV channels       | 0                 | 7           | 480p or 480i SD subchannels | 2.7               |
| No HDTV channels       | 0                 | 120         | Radio/audio subchannels     | 0.2               |

From the table above, we can see that digital television bit rates can vary widely, and there is every reason to believe that some carriers do push to squeeze in as many subchannels as they can.  However, to accommodate contingencies, the maximum bit rate should be used as the basis for selecting an appropriate network technology when sizing it for throughput.

The final issue to consider when sizing network capacity for streaming multimedia is what other network and internet connectivity must be maintained for simultaneous use.  Add up the numbers for all the different uses that need to be accommodated, and keep in mind that sizing for only 20 Mbits/s to stream to a single HDTV is fine only if no one else in the house intends to surf the net or download files at the same time.
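That sizing exercise is simple enough to script.  The demands below are illustrative, assumed values; swap in your own household's numbers:

```python
# Rough capacity check: sum concurrent goodput demands (Mbit/s)
# and compare against the network's typical (not advertised) throughput.
demands = {
    "HDTV stream to the living room": 20,
    "Web browsing":                    2,
    "Large file download":            10,
}

typical_throughput = 19  # e.g. a typical 802.11g link, Mbit/s

total_demand = sum(demands.values())
print(f"Total demand: {total_demand} Mbit/s")
if total_demand > typical_throughput:
    print("Undersized: expect stuttering during simultaneous use")
```

Here the total of 32 Mbits/s comfortably exceeds a typical 802.11g link, which is exactly the kind of shortfall the paragraph above warns about.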

The Cons of Wireless

For all its convenience, Wi-Fi does have some distinct drawbacks, and we will have a look at these issues to form a clear picture of the pros and cons of alternative networking methods:

  • Security
  • Transmission rate
  • Reliability

A significant problem for Wi-Fi is security, which requires proper network configuration to maintain; neglecting it does not prevent the network from working.  It is all too common to get a network running but leave it wide open to connection by anyone within broadcast range, a potential security hole into any personal information stored on the network.  An improperly configured network is not only wide open, it will broadcast its SSID to all within earshot unless that is disabled.  Disabling SSID broadcast is a first step, but it is not an actual security method; a proper encryption method is needed, and they are not all equally secure.  Older encryption methods such as WEP are easily hacked, and even the more recent WPA can be cracked by brute force.  The newer, more secure WPA2 standard is recommended, but every networked device must support compatible standards, potentially forcing either upgrades or dependence on the lowest common denominator security protocol supported by all the devices.

For those who do choose to go the wireless route, Ars Technica has a good article on securing a wireless network.

Data transmission rate is another potential limitation when using wireless to transfer high-bit-rate digital A/V signals.  Theoretical transmission rates of a wireless connection are always lower than those of contemporary wired solutions.  Wi-Fi equipment vendors are also reluctant to advertise that typical real-world throughput is only 20% to 50% of the maximum, preferring to quote the theoretical maximum rate:

Wi-Fi Performance and Standards

| 802.11 Protocol | Release Date | Operating Frequency (GHz) | Gross Bitrate (Mbits/s) | Net Bitrate (Mbits/s) | Typical Throughput (Mbits/s) | Performance Efficiency (%) | Indoor Radius (m) | Outdoor Radius (m) |
|-----------------|--------------|---------------------------|-------------------------|-----------------------|------------------------------|----------------------------|-------------------|--------------------|
| (legacy)        | Jun 1997     | 2.4                       | -                       | 2                     | 0.9                          | 45.0                       | 20                | 100                |
| a               | Sep 1999     | 5                         | 72                      | 54                    | 23                           | 42.6                       | 35                | 120                |
| b               | Sep 1999     | 2.4                       | -                       | 11                    | 4.3                          | 39.1                       | 38                | 140                |
| g               | Jun 2003     | 2.4                       | 128                     | 54                    | 19                           | 35.2                       | 38                | 140                |
| n               | ~ Nov 2009   | 2.4, 5                    | -                       | 600                   | 130                          | 21.7                       | 70                | 250                |

For wireless transmission, the difference between maximum theoretical rate and actual rate is due to protocol overhead and error correction/retransmission.  Anything that causes signal attenuation (distance, physical obstructions) increases error-correction overhead, slowing usable data rates as a function of the signal-to-noise ratio, with higher bit-rate transmissions being more susceptible to noise.  Keep in mind that the typical data rates above are generalized and will vary with the particulars of any given installation.

Currently, the fastest available Wi-Fi solution is a moving target: the draft IEEE 802.11n standard, which has not been officially adopted and presently stands at draft 2.0.  While products based on the draft standards are available, there is no guarantee that devices based on draft 1.0 or 2.0 will be completely compatible with the final standard.  Some incompatible devices may only require a firmware upgrade, but others will be physically incompatible with the final standard due to hardware differences.

Ethernet bit rates also drop as signal attenuation decreases the signal-to-noise ratio, but typical throughput hovers around 70%, and tests conducted at Los Alamos of the latest 10 Gigabit Ethernet standard achieved throughputs in excess of 7 Gbits/s, consistent with that mark.  Based on an approximate 70% of net bit rate we have the following:

Ethernet Standards and Performance

| IEEE 802.3 Standard | Release Date | Name       | Cable Type    | Net Bitrate (Mbits/s) | Typical Throughput (Mbits/s) | Range (m) |
|---------------------|--------------|------------|---------------|-----------------------|------------------------------|-----------|
| Ethernet            | 1972         |            | Coax          | 2.94                  | 2.06                         |           |
| Ethernet II         | 1982         |            | Thin Coax     | 10.0                  | 7.00                         |           |
| 802.3               | 1983         | 10BASE5    | Thick Coax    | 10.0                  | 7.00                         |           |
| 802.3a              | 1985         | 10BASE2    | Thin Coax     | 10.0                  | 7.00                         |           |
| 802.3i              | 1990         | 10BASE-T   | Twisted Pair  | 10.0                  | 7.00                         | 100       |
| 802.3u              | 1995         | 100BASE-T  | Twisted Pair  | 100                   | 70.0                         | 100       |
| 802.3ab             | 1999         | 1000BASE-T | Twisted Pair  | 1000                  | 700                          | 100       |
| 802.3an             | 2006         | 10GBASE-T  | Twisted Pair  | 10000                 | 7000                         | 100       |
| 802.3ba             | ~ Jun 2010   | 100GBASE-T | Optical Fiber | 100000                | 70000                        | 100+      |
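The 70% rule of thumb can be sketched in a couple of lines; scaling each standard's net bit rate reproduces the typical-throughput estimates (these are estimates, not measurements):

```python
# Rule-of-thumb typical Ethernet throughput: ~70% of net bit rate
def typical_throughput_mbits(net_mbits, efficiency=0.70):
    return net_mbits * efficiency

for name, net in [("100BASE-T", 100), ("1000BASE-T", 1000), ("10GBASE-T", 10000)]:
    print(f"{name}: ~{typical_throughput_mbits(net):.0f} Mbit/s")
```

As noted above, 70% is a conservative floor; well-tuned hardware and software can do considerably better.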

Then there are problems with connection reliability, uneven coverage, and signal interference that can lead to intermittent connections and further reductions in data rate.  While indoor Wi-Fi range typically exceeds the dimensions of most dwellings, it is reduced not just by the presence of floors and walls but also by the relative angle of the obstructions to the signal path: when devices are separated by an obstruction at a shallow angle, the apparent thickness of the obstruction is greater, and the loss in range and data rate may be enough to leave the extremities of a home without usable signal.  A significant number of other devices also operate in the 2.4 GHz Wi-Fi band and can cause interference and congestion: cordless phones, baby monitors, microwave ovens, and Bluetooth, not to mention the neighbor's Wi-Fi network.  Interference masks the signal where relative signal strengths are similar, decreasing the signal-to-noise ratio and increasing data reception errors.  So sit down to watch a favorite movie streamed from a media server, pop some popcorn in the microwave, and watch what happens to the stream; have fun.

Based on the throughput capacities provided here and the throughput demands in the section above, any Wi-Fi standard below draft n will likely run into capacity trouble just streaming a single HD channel, leaving minimal to no headroom for any other network transmissions or internet connections.  However, higher bit-rate Wi-Fi solutions such as draft n are also more sensitive to interference and other less-than-ideal conditions.  Ethernet solutions have the throughput to run HD video without bogging down or overloading the entire network, but unless a home is pre-wired, building such a network will be a painful endeavor.

Ethernet over Existing Wiring

Ethernet is ubiquitous: one will be hard pressed to buy a computer at present that does not have an 8P8C modular jack (improperly but commonly referred to as RJ-45) built into the machine, and PCI-based add-in adapter cards are plentiful and cheap for older machines.  The ubiquity is so widespread that it is also the standard connection for the majority of internet/network-enabled AV gear.  Ultimately, even if one uses Wi-Fi for the network, the signal has to be converted back to an Ethernet protocol before any device can be plugged in.

Numerous options that use existing home wiring systems to interface with Ethernet are available at present.  If one just happens to have some telephone lines, coaxial cable, or even standard electrical wiring running through their home, they can make use of these options very easily with the appropriate hardware.

The various methods for repurposing existing home wiring as a network involve embedding, or piggybacking, another signal onto the primary transmission in the wiring that operates at non-interfering frequencies.  Each home wiring network technology runs its signal at frequencies above those that the primary wiring system uses but they will not always be cross-compatible with other technologies that also make secondary use of existing wiring.

Home Wiring Primary Operating Frequencies

| Wiring System           | Operating Frequency (Low) | Operating Frequency (High) |
|-------------------------|---------------------------|----------------------------|
| Power Distribution (Hz) | 60                        | 60                         |
| Telephone (Hz)          | 300                       | 3400                       |
| Cable TV (MHz)          | 7                         | 1001.75                    |

The primary advantage of using a wired system is higher typical throughput than is available with wireless and more reliable connections.  The latest iterations of these alternate schemes have been pushed by triple play television service providers and most, if not all, have QoS mechanisms built into the specifications to provide seamless service.  The other advantage is security: anyone looking for unauthorized access to a wired network will have to physically access the network wires to tap them and not just merely be in range.

The primary disadvantage is hardware availability.  Wi-Fi gear is plentiful and available everywhere, as close as the local Walmart.  Most of these alternate solutions are not as readily available at brick-and-mortar stores at present, but with a bit of looking on the Internet they can be found, and many are manufactured by the same companies familiar from the Wi-Fi scene.

About the author:

Professionally, David engineers building structures. He is also a musician and audio enthusiast. David gives his perspective about loudspeakers and complex audio topics from his mechanical engineering and HAA Certified Level I training.


Recent Forum Posts:

DavidW posts on August 20, 2009 17:30
morrone, post: 612345
Well, I think you are missing something more basic. When the signal strength is low due to background noise, obstacles, distance, etc. the receiver will tell the transmitter to transmit at a slower speed. The major impediment to speed is usually signal attenuation, not protocol overhead or error correction/retransmission.

I think this is nit-picking semantics.

The statement I made does not assign any proportion to various components of a signal, it simply attempts to list them. The error correction coding is a fixed ratio of any transmission or retransmission. Signal attenuation, in and of itself, leads to higher error in transmission and when significant errors are detected, retransmission is required. This is how signal attenuation affects user perceived transmission speeds.

Transceivers that are designed to drop to a lower transmission rate do so because lower bandwidth signals are typically more robust, meaning a higher percentage of the transmitted data gets through without significant errors that require retransmission. At poor enough SNR, the performance of the lower bandwidth signal can exceed that of the higher bandwidth signal which is doing more retransmitting.

Good design practice suggests that the signaling method that yields the highest actual throughput should be selected, even if it has lower theoretical throughput. In the reference section, I included a link to a paper that examines selecting various modulations to maximize throughput for variable SNR.

morrone, post: 612345
I think you misunderstand the overhead issue for ethernet too. Signal attenuation is basically a non-issue for 10/100/1000 base-T if you stay within stated cable length limits. Unless there is something seriously wrong with your ethernet cable, error correction and ethernet protocol are pretty insignificant overheads. You should be able to get 95% of stated bandwidth pretty easily.

If you are only getting 70% through 10/100/1000, then it is a combination of MTU size, IP overhead, TCP overhead, OS network stack performance, and application performance. It is really not too difficult to get 90% of ethernet line rates at the application level with proper software design.

We can get about 90% of 10-GigE rates. At those higher speeds, having a fast enough bus on your motherboard and fast enough cpu and well designed application network code are far more significant factors. Ethernet overhead is basically a non-issue.

I think you are significantly overstating the ethernet overhead issue.

I never said that signal attenuation was a significant culprit in reduced Ethernet throughput, and the 70% value is likely conservative and not necessarily the value I would expect to get under all conditions; it was intended as a floor for performance.

The number is based on several sources including a Dell white paper that quoted results from a 2003 Los Alamos test of a 10GbE connection. It is fairly likely that performance has improved over the intervening years, but my intent was to provide a conservative number for goodput that had published backing.

The 70% number also likely includes issues such as data collisions on congested networks under high utilization and bottlenecking.

With prices for 10 GbE equipment at Newegg in the four-figure range, I imagine few consumers have 10 GbE equipment yet, and they likely do not have a well-designed professional network that avoids the issues that slow throughput over Ethernet.

The intent is to make sure no one overestimates performance across a wide range of possible configurations.
ivseenbetter posts on August 20, 2009 12:40
This is a good article. It definitely raises some good points and brings these potential solutions to light for folks who may be interested. I had looked into this a few years back and the limitations were such that it didn't make sense to utilize it. However, with the info that is provided here it sounds like things are changed. I'll definitely take another look at these solutions.
morrone posts on August 19, 2009 18:13
Corrections

“For wireless transmission, the difference in maximum theoretical rate and actual rate are due to protocol overhead and error correction/retransmission. Anything that causes signal attenuation, distance and physical obstructions, will increase overhead for error correction, slowing useable data rates as a function of the signal to noise ratio with higher bit rate transmissions being more susceptible to noise.”

Well, I think you are missing something more basic. When the signal strength is low due to background noise, obstacles, distance, etc. the receiver will tell the transmitter to transmit at a slower speed. The major impediment to speed is usually signal attenuation, not protocol overhead or error correction/retransmission.

I think you misunderstand the overhead issue for ethernet too. Signal attenuation is basically a non-issue for 10/100/1000 base-T if you stay within stated cable length limits. Unless there is something seriously wrong with your ethernet cable, error correction and ethernet protocol are pretty insignificant overheads. You should be able to get 95% of stated bandwidth pretty easily.

If you are only getting 70% through 10/100/1000, then it is a combination of MTU size, IP overhead, TCP overhead, OS network stack performance, and application performance. It is really not too difficult to get 90% of ethernet line rates at the application level with proper software design.

We can get about 90% of 10-GigE rates. At those higher speeds, having a fast enough bus on your motherboard and fast enough cpu and well designed application network code are far more significant factors. Ethernet overhead is basically a non-issue.

I think you are significantly overstating the ethernet overhead issue.
davidtwotrees posts on August 17, 2009 13:11
Excellent, informative article! I'm not a geek and tend to muddle through all things technical, and I picked up on most of the article's points. I tried wifi in my concrete shell apartment and it was terrible. I currently use a router with wired connections from my pc to my streaming blu ray player (samsung 2550), and another to my media server (escient fireball se80).
Post Reply