* Posts by laird popkin

4 publicly visible posts • joined 22 Jul 2007

Steve Jobs Flash rant put to the test

laird popkin
FAIL

This test was meaningless

This test misrepresented Jobs' claims, and thus produced irrelevant results.

Jobs said that Flash was responsible for the vast majority of web browser crashes, not Mac OS crashes. This is not just possible, but likely.

Jobs said that Flash was a CPU/resource hog. He said Flash, not just Flash playing h.264 video. It is obvious, but irrelevant, that playing h.264 video efficiently comes down to codec performance, which requires hardware acceleration to be CPU-efficient. But to support Flash, you have to support not only h.264 video (which is very rarely used in Flash), but the full range of Flash scripted interaction, and of course all of the older codecs. That is what consumes the CPU and RAM that make Flash such a drain on battery, CPU, and memory. To disprove this you would need to run a wide range of interactive Flash apps and show that none of them are slow or bloated. Or you could try to prove that Flash Lite is not only efficient but runs all the Flash on the internet. Good luck with that.

Jobs' actual claim was that to support the full range of Flash on the web, the iPad would need the CPU and RAM of a desktop computer, at which point it would cost much more, have a much shorter battery life, and on top of that be less stable (because Flash is unstable). So his interactive media strategy is to support h.264 video and JavaScript/HTML5, which are well defined and can be optimized to run efficiently and reliably.

BitTorrent net meltdown delayed

laird popkin
Go

uTP (LEDBAT) is a pretty good idea

The important thing about uTP is not that it is a UDP-based protocol for moving data. Moving P2P data over UDP is not a terribly new idea (my company, Pando, has been doing P2P over UDP for years, as have others).

The interesting thing about uTP is that it's the first step in an effort to create an open industry standard (LEDBAT) for moving bulk data over UDP in a way that is more manageable by ISPs than TCP. This is a good thing, and something I'm extremely supportive of. Interestingly, while some have speculated that this could lead to "net meltdown", the intention of the LEDBAT effort is actually the opposite: to move bulk data over a distinct protocol that lets applications detect congestion and "back off", so that more time-sensitive data can take higher priority, and so that congested routers can make more intelligent decisions than they can with TCP. This should ultimately be great for all involved.
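To make the "back off" idea concrete, here is a minimal Python sketch of the delay-based controller at the heart of LEDBAT: the sender watches one-way delay, treats anything above the lowest delay it has seen as queuing, and grows or shrinks its send window depending on whether queuing is below or above a fixed target. The class name and constants are illustrative, not taken from the uTP implementation.

    # Minimal sketch of LEDBAT-style delay-based congestion control.
    # TARGET, GAIN, and MSS values are illustrative, not from the uTP spec.

    TARGET = 0.100  # target queuing delay, in seconds
    GAIN = 1.0      # window gain per round trip at zero queuing delay
    MSS = 1452      # bytes per packet

    class LedbatController:
        def __init__(self):
            self.cwnd = 2 * MSS             # congestion window, in bytes
            self.base_delay = float('inf')  # lowest one-way delay seen so far

        def on_ack(self, one_way_delay, bytes_acked):
            # The minimum delay observed approximates the uncongested path.
            self.base_delay = min(self.base_delay, one_way_delay)

            # Everything above the base delay is treated as queuing delay.
            queuing_delay = one_way_delay - self.base_delay

            # Scale window growth by how far queuing is below the target;
            # this goes negative when over target, so the window shrinks
            # ("backs off") and time-sensitive traffic gets through first.
            off_target = (TARGET - queuing_delay) / TARGET
            self.cwnd += GAIN * off_target * bytes_acked * MSS / self.cwnd
            self.cwnd = max(self.cwnd, MSS)  # never below one packet

The contrast with TCP is that TCP keeps growing its window until packets are actually dropped, while a delay-based sender like this yields as soon as queues start to build.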

There's a related, parallel effort to optimize p2p traffic by making it more intelligent at the network level, called P4P (http://www.dcia.info/activities/#p4p), and an associated group in the IETF (ALTO). I would invite anyone interested in optimizing p2p traffic to read up on these groups' work - they hold the promise of significantly improving the way that p2p and ISPs work together.

Verizon makes nice with P2P

laird popkin
Happy

P4P is a win-win

Anonymous Coward posts: "When you make the API and expect the user-side application to behave, you know what you're calling for... surprise, surprise, the application will ABUSE it! This is a pipe dream..."

The nice thing about P4P is that localizing network traffic is a win-win for both the ISP and the P2P network, so both have a strong incentive to work together.

To illustrate, the test that Pando Networks and Yale ran on Verizon and Telefonica's networks showed that:

- Knowledge of the ISP infrastructure allowed the P2P network to localize traffic instead of assigning peers at random (see the peer-selection sketch after this list).

- Downloading from people near you is much faster than downloading from people at random locations: P4P downloads were on average 205% faster than random-peer P2P downloads.

- Transfers between ISPs dropped by over 50%, meaning that ISPs saved money on external transit (a major cost for ISPs), because that data was delivered within the ISP instead.

- Transfers within the ISP were also localized, reducing long-distance transit consumption within the ISP network. P2P downloads traversed an average of 5.5 long-distance links (e.g. city to city) to get from the seed to the downloader; with P4P, transfers averaged 0.9 long-distance links (i.e. consuming much less of the network). To look at it another way, with random-peer P2P only 6% of data downloaded within the ISP came from your own metro area. With P4P, that number was a whopping 58%.
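Roughly, the peer-selection idea looks like the Python sketch below. This is a simplified illustration: in real P4P the locality information comes from an ISP-operated iTracker rather than fields on the peers, and the weighting is more sophisticated.

    # Hypothetical sketch of ISP-aware ("P4P-style") peer selection.
    # The Peer fields and three-tier preference are illustrative only;
    # real P4P gets topology hints from an ISP-run iTracker service.

    import random
    from dataclasses import dataclass

    @dataclass
    class Peer:
        addr: str
        isp: str    # which ISP the peer is on
        metro: str  # metro area within that ISP

    def select_peers(me, candidates, n):
        # Prefer peers in the same metro, then the same ISP, then anywhere,
        # so most traffic stays local and inter-ISP transit is minimized.
        same_metro = [p for p in candidates if p.isp == me.isp and p.metro == me.metro]
        same_isp = [p for p in candidates if p.isp == me.isp and p.metro != me.metro]
        remote = [p for p in candidates if p.isp != me.isp]

        chosen = []
        for pool in (same_metro, same_isp, remote):
            random.shuffle(pool)  # keep some randomness within each tier
            chosen.extend(pool[:n - len(chosen)])
            if len(chosen) >= n:
                break
        return chosen

The numbers above fall out of this preference order: the downloader gets nearby, faster peers, and the ISP's expensive external links carry only the traffic that genuinely can't be served locally.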

Because this is a win-win situation, there's no "abuse", in that nobody "loses" if the other side does too much P4P.

There's more information about P4P at http://www.pandonetworks.com/p4p , and the P4P Working Group at http://www.dcia.info/activities#p4p .

- Laird Popkin, Co-Chair, P4P Working Group (and CTO, Pando Networks)

When 'God Machines' go back to their maker

laird popkin

The iTunes example is exactly wrong

The way that Apple got the labels to make their music available in iTunes was actually to pitch it as a "Mac-only" store, and thus a very small risk to the music business. Only after the iTunes store demonstrated viability as a business was Apple able to convince the labels to license their music for sale on Windows. And the labels are thrilled to license their music to any retailer on the same terms; they have no desire at all for Apple to be the dominant online music retailer. If you could launch a viable competitor to iTunes, the music labels would license you their inventory on the same terms in a heartbeat, just to create more competition in the distribution channels.

If Apple had tried to pitch itself as the "soon to be dominant online music retailer" that would have convinced the music labels to do pretty much anything to keep that from happening. The music industry has plenty of experience with companies (e.g. ClearChannel, BDS) that become dominant and make their lives difficult, so they'd like to avoid that in the future.