43 posts • joined 2 Jul 2009
Re: Obvious need for..
> if you want widespread adoption of a new technology it needs to be backwards compatible, because people. As a prior example see the roll-out of colour television, where the signal was backwards compatible with black and white sets...
For most 525-line and 625-line systems, backwards compatibility wasn't difficult. In those systems, adding color was nothing more than adding supplemental information within the payload. Almost everything else about the broadcast remained the same. For terrestrial broadcasts, you also had the benefit of there being no intermediary devices between the transmitter and receiver that had to be color-aware.
But for a backwards compatible IPv6 system, things become very difficult because you need end-to-end awareness for it to work. What would happen if the extra address length were added as an IPv4 header option and the packet passed through a router that wasn't IPv6-aware? That router could strip the information, resulting in a misdirected packet.
Sometimes, passive backwards compatibility just isn't possible. Just look at the "dual-stack" period that Ireland and the UK had while they supported both the newer UHF 625-line and older VHF 405-line systems. Between the incompatible frequencies, channel widths, and timing, the only way to bridge the two standards was through the use of translator stations that converted 625-line content down to 405-line signals for older televisions.
In the case of IPv6, proxies and network address translation devices act as an equivalent to those translator stations. Sure, it adds complexity in some ways, but it also simplifies things in others.
Re: "the world is clinging stubbornly to IPv4"
> Their solution was to just drop IPv4 altogether and only have a bunch of edge servers speak dual-stack.
The large finance company I work for is going down the same road. We had some struggles during the last corporate merger due to significant overlap in RFC 1918 network ranges. We also have to use source and destination masquerading on our private tunnels running to third parties due to similar range overlap.
Our long term goal is to have our perimeter firewalls and load-balancers running dual-stack so they can perform NAT-4to6, while having everything else behind them running native IPv6.
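For anyone curious what "dual-stack" means in practice, here's a minimal Python sketch (nothing to do with our actual firewall kit): a client asks the resolver for both A and AAAA records and prefers IPv6 when it's on offer.

```python
import socket

def resolve_dual_stack(host, port):
    """Resolve a host over both address families, preferring IPv6."""
    results = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # sort so AF_INET6 entries come first, mimicking the usual preference.
    return sorted(results, key=lambda r: r[0] != socket.AF_INET6)

for family, _, _, _, sockaddr in resolve_dual_stack("localhost", 80):
    print(family.name, sockaddr[0])
```

A real client would then try each address in order until one connects ("Happy Eyeballs" style), but the resolution step above is the dual-stack heart of it.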
Besides eliminating future potential merger issues, the company is motivated to complete this migration so that they can sell their sizable holding of IPv4 addresses while they still have significant value.
On the plus side, this company did an extraordinary job disabling access. I've worked at a few places where enabled accounts for long-gone employees were commonplace. On the negative side, the system should have escalated the issue to a secondary or tertiary individual when the contractor was eligible for renewal but their paperwork hadn't been finalized. Also, it sounds as if the system lacks verbose auditing for management when it does axe a worker. Time to generate some preventative action item reports...
Re: I dont really understand why i need 5g at all
> I believe there are considerations in high density areas - 5G is shorter ranged using more, smaller base-stations, and these will have less users on each one.
That depends on the frequency being used. The new 5G NR protocol includes a number of new channels above 2 GHz that will be used for micro-cells and pico-cells. But it also supports all of the channels in use today. It isn't unlike how UMTS/HSPA and LTE each gained access to channels that were either unused or experimental during the previous generation.
Expect to see carriers reallocate most, if not all, of their older 2G and 3G channels for 5G in the next few years. So anything from 450 MHz to 60 GHz could be used.
> Other than that, I agree that 4G is fast enough for the foreseeable future
While you may not need the extra bandwidth, one major advantage to shorter transmission times is that radios can go back to sleep sooner, which helps with power management.
Also, the new 5G NR protocol is designed to help cellular carriers better compete with traditional WISPs. It offers lower latency than LTE, better channel/tower aggregation support, and the possibility of switching from FDD to TDD, which allows the network to better handle changing asymmetric loads.
Re: Data smugglers, look at the back of the PC
Unlikely. Instead of relying on the honor system, they'll probably roll out an enforcement agent to all of their systems (if they haven't already). Such an agent would probably block any untrusted drive, not just hot-swappable ones.
My employer uses software like this. Users are authenticated via a bootloader before either Windows or MacOS starts. The rest of the drive is encrypted. If there are any unencrypted drives or partitions, they are never allowed to mount.
If I attach a removable USB, Firewire, or eSATA drive, or if I insert a disc into an optical drive, the agent first checks if I have removable media rights. If I do, it next checks if it is encrypted or not. If it is encrypted with my PC's key, it'll mount. If it is encrypted with another PC's key, it performs a rights check against that PC and mounts if I have access. If it is not encrypted, it'll prompt me to securely format it if it is a writable medium. If all that fails, the media is ignored.
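For the curious, that decision chain boils down to something like this toy Python sketch (every name here is a hypothetical stand-in, not the agent's real API):

```python
def decide_mount(user_has_media_rights, encrypted, key_owner, this_pc,
                 user_can_access_pc, writable):
    """Toy model of the removable-media mount decision described above."""
    if not user_has_media_rights:
        return "ignore"                      # no removable media rights at all
    if encrypted:
        if key_owner == this_pc:
            return "mount"                   # encrypted with this PC's key
        # Encrypted with another PC's key: rights check against that PC.
        return "mount" if user_can_access_pc(key_owner) else "ignore"
    # Unencrypted: offer to securely format writable media, else ignore it.
    return "prompt-secure-format" if writable else "ignore"

# Example: an unencrypted, writable USB stick, with media rights granted.
print(decide_mount(True, False, None, "PC-1", lambda pc: False, True))
# -> prompt-secure-format
```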
Re: "Taking over two minutes to load a 64 kilobyte into memory was maddening."
It shouldn't have taken that long. According to "Commodore - A Company on the Edge", Robert Russell considered the disk access on the VIC-20's serial bus to be too slow. So for the C64, they replaced the 6522 VIA (with its broken shift register) with a 6526 CIA and added some high speed lines to the serial port. Same deal with the 1541. It would have been 20 to 30 times faster than the VIC-20.
Problem was, during the period when prototype boards were being reworked into their final production layout, somebody removed the high speed lines. The decision to reuse VIC-20 cases for the C64 imposed some serious space challenges. Some engineer mistakenly thought they weren't being used, so they were omitted to save space on the board. Since the Kernal's serial routines hadn't yet been updated to utilize the new burst mode, nobody realized the mistake until several hundred thousand motherboards had been manufactured. By then, it was too late.
When marketing said that the 1541 also had to be backwards compatible with the VIC-20, that limited how the serial routines could be written even further. There was also a looming deadline. So Bob Russell wrote the routines over a weekend to stay on schedule. They worked, but they sucked. Programs such as Epyx Fastload and CMD JiffyDOS worked by replacing the serial routines with optimized versions.
Anyone who has used a 1571 or 1581 on a C128 knows how much faster they are. That's what the 1541 on the C64 was supposed to be like. But due to a few bad decisions, the 1541 ended up dirt slow.
I also love everything about my GS5 Neo, save the lack of voice-over-LTE support. I hate watching my signal strength drop from four bars to one bar when a voice call comes in.
The Galaxy A8+ 2018 wouldn't be a bad replacement if my GS5 died, but I'd miss the removable battery.
I would think that personalized service at a restaurant would be more subtle. Instead of knowing the food and drink you want to order, it would know the food and drink that you'd never want to order. Have a distaste for cilantro, hot spices, and red wines that are high in tannins? We'll move those foods to the back of the menu.
The only place I frequent where such a simple customer preference system might work is my barbershop, given that I tend to have the same haircut for half the year.
> Not "possibly". That is the common practice between the HTTPS-terminating load balancers and the back-end web servers.
Not necessarily. If the company has policies that prohibit confidential data from being on the wire unencrypted, then you cannot run HTTP between the load-balancer and web servers. This is sometimes mandated by B2B contractual agreements and cannot be avoided.
Luckily, many load-balancers these days allow you to decrypt HTTPS traffic, inspect and modify the HTTP payload, and then re-encrypt it back to HTTPS before sending it onto the web servers. This allows you to use session stickiness features while still keeping traffic encrypted on the wire.
Re: what's the frequency kenneth?
Both. One report I read classified 5G New Radio (NR) bands as such: under 1 GHz are "low", 1 to 6 GHz are "middle", and above 6 GHz are "high".
All of the 5G NR frequency charts I've seen so far only define bands in the low and mid ranges. Most of the bands are a subset of current LTE bands.
It appears as if some of the first 5G deployments might be on bands used for 2G and 3G services today. A few carriers have stated that they're retiring those services starting next year. Meanwhile, 5G telco equipment is supposed to roll out around the same time, so the timelines make sense.
But those deployments may also be on newly acquired bands between 2 and 5 GHz. Both the FCC and Ofcom have been busy auctioning off those bits of spectrum. The carriers will probably want to put it all to good use.
Split? Too late...
> The bill ... ensures that the US doesn't repeat the CDMA/GSM standard-split where the US went one direction and the rest of the world went another.
Like what we're seeing with frequency allocations on the 700 MHz band? It appears that much of the world (including much of Latin America) will adopt the APT plan, allocating two contiguous 45 MHz blocks for each uplink and downlink, while the US and Canada continue with their weird mish-mosh of upper and lower blocks.
> This is basically opening a can of worms.
That's an understatement. Many people with corporate laptops and smartphones that tunnel back to the office often switch between fixed wired and mobile wireless ISPs. How is that handled? Is the fee per person, per ISP, per account, or per source address? I imagine that a business with thousands of Blackberries tunneling home would be upset if they had to pay a fee per device to cover their wireless service.
Re: What really narks
> are sites that ... pop up passive aggressive messages say “We notice you’re ad-blocking."
Install uBlock Origin. Right click on banner. Choose "Block Element" from the menu. Rejoice.
> America is 11th per 100,000 deaths by firearms
About half of those deaths are suicides. Some countries don't include firearm suicides in their firearm-death figures the way the US does, so direct comparisons can be difficult.
That said, this tragedy touches upon a couple of ugly issues. First, nobody can have an honest conversation about gun law in the US because so many of its politicians have been bought and paid for via campaign contributions from gun manufacturers and their advocacy groups, such as the NRA. They will not bite the hand that bribes, err, feeds them. Nothing is going to change until election finance law changes, and that won't happen until corporate personhood is revoked.
Second, many people in America see these shootings as acceptable losses. When you exclude homicides in poor urban areas and suicides, the homicide rate by firearms in the US drops to rates similar for Eastern Europe. They fear that without arsenals, Obama and his henchmen might show up to their farm and seize it for redistribution to poor blacks and illegal immigrants. Or they'll show up in tanks and burn them out, like Waco or Ruby Ridge. Or some other moonbeam fantasy that AM talk radio hosts dream up to keep their listeners paranoid and tuned in.
If any legislative changes come out of this shooting, it'll be to get more guns into schools, not fewer.
Not a big deal
Who is this going to affect in the near term? Most home users I know are happy running an older version of Office. Microsoft talks about the "pace of change accelerating", but the features most home users care about are fairly static. I know more than a few people who still run Office 2003.
Business users might care more for the new collaboration tools in recent versions of Office, and therefore have a greater need to stay current, but they tend to be on a different lifecycle model than your typical home user. When my Thinkpad gets replaced every few years, I get a new version of Windows and Office.
Re: Roku F-u
Roku won't fix the WiFi Direct issue because they see it as a feature, not a bug. Luckily, there is an option to disable it buried deep in the settings. Unfortunately, my Roku ignores the option. So I had to fiddle with the power and interference settings to reduce the WiFi Direct transmit power instead.
Re: everyone replaces their PCs
Faxes are still used in real estate by older agents who shun PDFs and e-signing. Nobody wants to lose a deal by telling them to get with the times, so you just put up with them. That's why my spouse has a multifunction printer set to fax mode connected to our landline.
Re: Makes sense now, but what about the future?
It appears that most OSes will allow you to disable these security options via a registry key or boot loader option.
> But the control centre for my nuclear reactor only works on XP
Windows Embedded 2009 uses the NT 5.1 kernel and has extended support through 2019. I wonder if Microsoft has enough customers with support contracts that they'll be pressured into back-porting this fix to that old code tree.
Re: Will 52 ESR continue working?
The problem with forking a newer version of Firefox to work with XP is that programmers would have to tweak their patches with every new release. That might be more effort than it is worth.
Maybe a better solution would be to create a set of patches for XP that implement the new Windows 7 functions with XP's kernel32, user32, shell32, and related libraries, similar to how KernelEx extends 98/ME.
The US banking system is almost entirely electronic now. Cheques are little more than legacy forms for initiating an ACH electronic transfer. The cheques are scanned, run through an OCR program, and then immediately submitted for transfer. Most banks and credit unions provide desktop deposit apps for scanning cheques at home or office. This is in addition to all of the paperless electronic funds transfer systems in use.
If somebody asks for a cheque, they're either new or lazy. Banks still offer old-fashioned wire transfers. International remittance services handle smaller transfers. Foreign exchange brokers handle larger transfers. Point-of-sale card networks can also be used to transfer money internationally.
Internet speeds in the US aren't far behind Europe, especially in urban areas. The real problem is that the cost of that service is significantly higher in the US.
Most VHF band II radio stations in the US now simulcast in both analog FM and digital NRSC-5 ("HD Radio"). Unlike the European DAB standard, NRSC can transmit in-channel, so there is less pressure to shut down the old analog system. Too bad that most commercial radio stations in the US are no match for streaming services.
The problem with those stories about US infrastructure crumbling is that they are often overblown. A bridge might be rated structurally deficient if the shoulders don't meet current code. And while Oklahoma and Kansas allow their roads to rot, other states do not. It is like saying that the roads in the EU suck because you use southern Italy as the benchmark.
Re: additional battery capacity
This might be a good time to disable updates, if you can.
I don't believe you can unless you sabotage the cellular modem in your car. That will most likely invalidate your car's warranty and you'll be stuck with the "car needs service" warning being displayed forever.
Re: Trade War
> The US doesn't even manufacture memory chips anymore.
My understanding is that there are fabs in Arizona, California, Oregon, Virginia, and Utah that are either producing current generation memory chips or could be quickly reconfigured to do so. If you include older generation chips used by embedded devices, the list grows longer.
The bigger question is where end-device manufacturing would go in the event of a trade war. Would it come back to North America or would it go to other Asian countries like Taiwan, S. Korea, or Japan? Ideally, you want to manufacture your chips close to your customers.
Re: Router fault?
Another article mentioned that Comcast had the MAC address of another customer's router associated with this guy's account. So it was a fat finger issue, not a router issue.
Re: Remember OS/2 Warp
Nobody bothered porting to OS/2 because its market share was so small and because the native API didn't offer much of an advantage over the WinAPI. If OS/2 had been more popular and had included some must-have OS calls, vendors would have written native apps.
A good number of F5 boxes around the world started falling down yesterday. Turns out that some of the RH Enterprise Linux source trees they use didn't have the leap second patch applied. No customer impact in our shop since we deploy our F5 boxes in redundant pairs, but a good number of underpants were soiled during the workday.
Time for HTTP-ES
Another option would be for HTTP to follow in the footsteps of FTP and introduce an explicit mode that allows clients to optionally step up to TLS over the native port and to allow mixed content.
With FTP-ES, a client connects to an FTP server on port 21 using clear-text. The client can then request to enter secure TLS mode. If the server doesn't support it, the client can abort or continue. Likewise, if the client doesn't support FTP-ES, the server can restrict access. The protocol also allows granular encryption of the control channel, the data channel, or both.
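For reference, the explicit step-up is just a short command exchange on the clear-text control channel. A minimal Python sketch of the sequence, assuming a cooperative server:

```python
def ftpes_handshake(secure_data=True):
    """Return the FTPES command sequence a client sends on plain port 21."""
    cmds = ["AUTH TLS"]        # request TLS step-up on the clear-text port
    cmds.append("PBSZ 0")      # protection buffer size (always 0 for TLS)
    # PROT P encrypts the data channel; PROT C leaves it clear-text.
    cmds.append("PROT P" if secure_data else "PROT C")
    return cmds

print(ftpes_handshake())       # ['AUTH TLS', 'PBSZ 0', 'PROT P']
```

Python's stdlib `ftplib.FTP_TLS` drives this same exchange for you, via its `auth()` and `prot_p()` methods.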
In a theoretical HTTP-ES, a new HTTP 4xx code could be introduced that requires clients to step up to TLS mode. It could require granular encryption of cookies, all headers, payloads, or everything. Likewise, a client could automatically request TLS if it is trying to present a secure cookie to the server or if the user prefers it. It could also provide a CRC of the payload in the encrypted portion to guard against MitM tampering.
The benefits of HTTP-ES would be: no broken bookmarks, lower overhead when all you need is cookie or header obfuscation, increased protection against MitM attacks and some compatibility with external cache servers.
Is this a real 10GbE link, like 10GBASE-SR or 10GBASE-T, or is it a time-sliced link, like 10GBASE-PR? Last I checked, you can take an EPON cable from the CO and split it several hundred times.
>> Both technologies are said to improve colours and black levels - an obvious target perhaps, since many consumers have previously enjoyed the pictures from plasma televisions.
Quantum dots arrived on the market at just the right time. You cannot shrink the cells in a plasma monitor down to the size needed by 2160p without exceeding EU and California power regulations, which is why UHD plasma monitors will never advance beyond a few prototype models.
>> Its good see that the big players are looking at more than just resolution, and are competing on black levels and colour accuracy.
The new UHD specification (Rec. 2020) includes more than just increases to pixel resolutions. The specification calls for color bit depths of either 10 or 12 bits and a color gamut that is over twice the coverage of HD.
I'd argue that this is really going to drive the adoption of quantum dot LCD, OLED and laser display technologies over traditional LCD (which can barely handle the current HD color gamut), especially for smaller monitors that can't take advantage of 2160p resolutions. When you throw Avatar on a wide gamut monitor, people are going to be absolutely blown away by the color.
The main issue I see with image matching is that the Captcha folks will need to keep an image repository that is either large and/or dynamic enough that people can't just run through the test a bunch of times, saving the results for a bot to use.
Sure, Google could just grab a few million cat photos from their image search repository, but what is the legality of that? A legal set might be much smaller.
Also, there is a danger in using animals for the captcha. Image recognition software for people has become very good. It wouldn't be terribly difficult for a spam gang to enhance it to where it can tell a cat from a horse.
Re: It sucks but..
Agreed. As long as you stay under the radar, you will be of little interest to many hackers and government spooks. Security through obscurity.
But as many of us in the I.T. world know, S.T.O. is really a poor long-term method for keeping things safe. Eventually, somebody is going to cast a really wide net just to see what they can catch. Anyone could become a victim.
Which brings us to victim blaming. The idealists in this world condemn the practice, suggesting that people shouldn't be forced to alter their behavior because of thugs and criminals. The realists in the world condone the practice, suggesting that people should use common sense in dangerous situations because there will always be thugs and criminals who are looking for any excuse to act.
I tend to walk the middle of the road in that debate. We shouldn't have to lock our doors at night. But I do so anyways because I know we'll never be able to stop all of the crooks in the world. Likewise, we should be able to upload compromising personal photos to any private location. But I keep mine in personal cold storage because we'll never be able to stop all of the hackers in the world. Maybe that'll change some day, but I'm not willing to risk it now.
Re: Digital Clone
Actually, the Reed–Solomon coding used in audio CDs provides 8 bytes of parity for every 24 bytes of audio, making up each 32-byte frame. What is missing in audio CDs is a cyclic redundancy check (CRC), which could be used to detect the rare situations where the parity-based correction fails silently.
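A toy example of the kind of failure a CRC catches but a lone parity check misses (a gross simplification: real CD error correction is cross-interleaved Reed–Solomon, far stronger than one XOR byte, but the detection-vs-parity point is the same):

```python
import binascii

def xor_parity(data: bytes) -> int:
    """Single XOR parity byte over a block of data."""
    p = 0
    for b in data:
        p ^= b
    return p

frame = bytes(range(32))
corrupt = bytearray(frame)
corrupt[3] ^= 0x10   # flip bit 4 in byte 3
corrupt[7] ^= 0x10   # flip the same bit in byte 7: the two flips cancel in parity

assert xor_parity(frame) == xor_parity(bytes(corrupt))          # parity is fooled
assert binascii.crc32(frame) != binascii.crc32(bytes(corrupt))  # CRC-32 catches it
print("parity missed the two-bit error; CRC32 detected it")
```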
And the whole idea that the audio on a CD-R sounded different than a traditional CD is mostly garbage. The payload of the data frames will be exactly the same between both types of media, so the decoded audio should be exactly the same. The only difference comes from the higher number of read errors that a player will encounter with a CD-R, necessitating a trip to the imperfect error correction routine. People who thought they were "sharper" were fooling themselves.
Re: Self inflicted
> But I do wonder if all he really achieved was making himself more fussy.
Ignorance can be bliss when it comes to low quality music. When you train your ears and mind to pinpoint compression artifacts, you can't turn it off. Suddenly, all of those 128 kbps MP3 audio files you grabbed from Napster in the 1990s are garbage to your ears.
> maybe it became more of a technical exercise than anything else
Fixation with perfection can be a really bad thing. Reminds me of the chase that Japanese radio manufacturers have with distortion levels. Most people can't hear levels below 1%, yet during the '80s and '90s, the average THD for a receiver dropped into the hundredths and thousandths of a percent. Great on paper, but was it worthwhile?
Re: Hey Microsoft: You want to keep hardware partners onside?
> the hardware vendors would have no particular loyalty to Microsoft either
Agreed. If eComStation (OS/2) was revamped to a point that it was competitive or superior to Windows, most x86 system builders would start offering it.
But even in a market where the hardware and software are designed by one company, we have seen healthy 3rd party manufacturers. Just look at the history of Apple II and Macintosh clones (before they were sued). If people think that a dollar, euro or yen is to be made, they'll do it.
> I guess what Google needs to do is either get more vendors shipping vanilla builds that Google will manage the over the air updates for, or split the system partition up a bit so that vendors can add their junk in there but Google can offer partial OTA updates for vital security updates.
Another option would be for Google to keep a legacy kernel interface and driver model in newer versions of Android. One commonly cited reason for so many handsets being left on 2.3 Gingerbread is that so many things changed in the kernel for 4.0 Ice Cream Sandwich.
I've heavily utilized the legacy driver support in Windows over the decades, and it has saved me from throwing out an older machine or peripheral card more than once. I have an older laptop running W7 with shimmed XP video drivers (XDDM) in the kids' room right now. That's a great thing for Microsoft to have left in.
UHD over the air
I'm actually less excited about 2160p than I am about the next over-the-air standard. This side of the pond, we're still using H.262 for our HD transmissions. That usually means a single 720p or 1080i main channel with a couple of 480i subchannels sharing the 19Mbps stream.
A switch from H.262 to H.265 along with a switch from 8VSB (19Mbps) to 16VSB (38Mbps) might finally mean the end of SD subchannels. I'd be quite pleased if everything was 720p or higher.
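Back-of-the-envelope, assuming the common rule of thumb that H.265 needs roughly a quarter of the H.262 bitrate for similar quality (illustrative numbers, not measurements):

```python
# Rough multiplex budget sketch; all figures in Mbps and purely illustrative.
h262_mux = 19                               # today's 8VSB multiplex
h265_mux = 38                               # hypothetical 16VSB multiplex
bitrate_720p_h262 = 12                      # typical main-channel share today
bitrate_720p_h265 = bitrate_720p_h262 / 4   # ~3 Mbps under the efficiency assumption

print(int(h265_mux // bitrate_720p_h265))   # ~12 HD channels per multiplex
```

Even if the real compression gain is smaller, the headroom would comfortably retire the 480i subchannels.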
Re: blame linux?
While Google is free to make a hard fork of Linux at any point in order to keep the Android kernel API stable, doing so would inevitably cause drift to be introduced. Depending on how far that drift goes, it could make importing new features from the main Linux tree very laborious. One of the main points of using Linux is that you get to easily ride the coattails of other peoples' work.
Re: blame linux?
There have been a number of articles that suggest that the Linux kernel API is volatile by design. Part of it comes from Linux still having a foot in the research and software theory world. Part of it comes from a passive-aggressive desire to eliminate private code trees. I suspect that as long as Android is based on Linux, driver compatibility will continue to be a problem, just as it is with private binary blobs on the desktop.
Makes me wonder if they would have been better off if they had used FreeBSD instead of Linux.
Issues with glare with newer plasma screens
One issue with plasma televisions in the North American market is that manufacturers have switched to phosphors with a very short decay time. While this is great at preventing ghosting and other color issues when transitioning from very bright to very dark scenes, it comes at the price of having increased flicker. It is not unlike looking at an early 1990s VGA monitor with a low refresh rate. It gives a lot of people eye strain and headaches.
In countries with 50Hz mains, higher end CRT and plasma televisions have avoided this issue by doubling the refresh rate to 100Hz. But such double scan modes don't seem to exist in the 60Hz mains market.
The maddening thing is that 3D plasma televisions do have a 120Hz mode for 3D content, but then they drop back to 60Hz for 2D content. A few accept a 1080p120 2D signal over HDMI, but then you're locked to that input to retain the 120Hz refresh.
And another issue with all plasma televisions is that a few years ago, manufacturers introduced some power saving techniques that resulted in sudden jumps and drops in brightness between scenes. Firmware updates reduced the problem, but with some televisions, it was still noticeable.
My two 42" Panasonic plasmas are only six years old, so they have a lot of life left in them. But I now have a larger living room and would like to replace one with a 50". I brought home a new Pani last year, but had to send it back after just a few days. Between the refresh rate and brightness jumps, I couldn't stand it. Since I can't stand the color rendering of LCD screen, it appears that I am going to hang onto my current sets as long as I can.
What AT&T Wireless Services referred to as "TDMA" was actually the D-AMPS standard (IS-54) developed by Bell Labs. Both it and GSM were 2G cellular technologies that used a time division multiple access scheme (TDMA).
For those of you across the Pond who never heard of it, D-AMPS was a digital extension of the old AMPS analog standard that ran on the 850MHz band. It would have been akin to taking NMT or B-Netz and making GSM backwards in-band compatible with them. They even tweaked it to support SMS texting (IS-136).
AT&T Wireless Services started migrating from D-AMPS to GSM about a year before they were bought out by Cingular. Very few phones from AT&T at the time supported GSM, so the author of the article was lucky to have all that bandwidth to himself. AT&T even offered some funky phones that supported both networks - I owned a Nokia 6340 GAIT phone that supported analog AMPS, digital D-AMPSv2 (IS-136) and GSM.
Great idea for DVB-T, but not so great for ATSC
The DVB-T standard actually allows you to create a single-frequency mesh network using a bunch of low power transmitters all broadcasting on the same frequency. That's one of the benefits of using COFDM. So in Europe, this would be a great idea.
The ATSC standard is a bit different. You can't use a mesh network because the current 8VSB modulation standard doesn't hold up to heavy multipath the way COFDM does. That was the cost of having a modulation system that is 30% more efficient to transmit. Now, there is talk of adding additional error correction to the MPEG bitstream via the E-VSB addition to help out against multipath, but nobody really knows how well it'll work.
It might be time for the FCC to consider adding 4/16/64-QAM as alternate modulation standards to ATSC. They can roll it out as a requirement along with MPEG4-AVC/H.264 video compression for their "ATSC v2" standard. Then you can mesh away all you want. Best of all, everyone will get free converter box upgrades again because the wireless companies will have loads of spectrum to buy from Uncle Sam.
It has its uses...
Anonymous Coward: "First we have to have digital TV whether we like it or not. Now digital radio."
Digital television, at least here in the States, was pushed (in part) as a means to fix the adjacent UHF channel issue. Historically, you couldn't have two analog stations close to each other on the UHF band because they would clobber each other. This was due to limits with "SuperHET" tuners and just the general characteristics of AM television transmissions. That's why in North America, UHF channels were often 6 channels apart from each other. Therefore, by switching to DTV, radio licensing authorities were able to consolidate UHF channels, allowing them to sell bandwidth in the upper UHF television band (ch52-69/700-800MHz) for big money. Governments get cash, commercial interests get a new product to sell, and we get new bandwidth for our wireless gadgets.
Meanwhile, digital radio really isn't being pushed by government officials. This is strange because it could be used as a way to radically expand radio channel capacity. Your average wideband FM radio station in VHF Band II (88-108MHz) uses a 200 kHz channel slot. By switching to a digital radio standard that uses 50 kHz channel slots, you could get four times as many stations.
You could also use it as an opportunity to expand the range of radio stations. Here in North America, most of the TV stations dumped VHF Band I (ch2-6/54-88MHz) for DTV transmissions because of atmospheric reflections and other issues. Imagine having 1,000 radio stations by combining both Band I and II. Even if your metro area only used a quarter of them to keep adjacent channel interference down, that would be a huge gain over what we have today.
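The back-of-the-envelope numbers work out (all figures in kHz):

```python
# Channel-count arithmetic for the two VHF bands mentioned above.
band_i  = 88_000 - 54_000    # VHF Band I: 54-88 MHz -> 34 MHz
band_ii = 108_000 - 88_000   # VHF Band II: 88-108 MHz -> 20 MHz

print(band_ii // 200)              # analogue 200 kHz FM slots in Band II: 100
print(band_ii // 50)               # 50 kHz digital slots in Band II: 400
print((band_i + band_ii) // 50)    # both bands combined: 1080 stations
```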
I think one reason why digital radio has sucked up till now is because many have been MPEG-2 layer-2 based. Unless you have a very wide channel, it doesn't leave enough room for both audio and high amounts of error correction. Since the channels were too narrow, you had crappy sound.
Now that we have a newer generation of standards, including HD Radio, DAB+ (Digital Audio Broadcasting Plus), DMB (Digital Multimedia Broadcasting) and DRM (Digital Radio Mondiale), all of which use MPEG-4 HE-AAC (or a proprietary variant of HE-AAC in HD Radio's case), broadcasters are able to include enough error correction so that a light breeze doesn't cause the signal to fall over.
Problem is, there are too many standards. And there are too many bands for audio. Too many interests with too much money at stake. Every country wants to do it a different way. So, expect that multiband radios will become a fixture in our future if you plan on any sort of international travel, or if you live near a border.