Friday, August 20, 2004

What's after Bittorrent?

You know bittorrent, right? It's the way to download huge files using P2P connections. Recently, Microsoft tried to use the torrent concept to distribute its huge SP2 patch. The concept behind bittorrent is very appealing to me because it's very, very democratic! Let me explain a little more about bittorrent, and it'll become clear why.

First, to use Bittorrent, you need a bittorrent client. The one I like best is called Azureus. Bittorrent uses what are called 'torrent files'. This is basically a small file that has the details of the whole file you want to download. Since it's usually only a few tens of kilobytes in size, it hardly takes any time to fetch. Once you've got the torrent file, you point your bittorrent client at it, and then the client takes over. It contacts everyone who has bits of the huge file and requests those pieces from them.
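(For the curious: a torrent file is just data in a simple encoding called 'bencoding'. Here's a rough Python sketch of how a client might decode one — the function names are mine, not taken from any real client.)

```python
# Minimal bencode decoder sketch. Bencoding has four types:
# integers (i42e), byte strings (4:spam), lists (l...e), dicts (d...e).

def bdecode(data: bytes):
    value, rest = _decode(data)
    if rest:
        raise ValueError("trailing data after bencoded value")
    return value

def _decode(data: bytes):
    if data[:1] == b"i":                      # integer: i<digits>e
        end = data.index(b"e")
        return int(data[1:end]), data[end + 1:]
    if data[:1] == b"l":                      # list: l<items>e
        items, data = [], data[1:]
        while data[:1] != b"e":
            item, data = _decode(data)
            items.append(item)
        return items, data[1:]
    if data[:1] == b"d":                      # dict: d<key><value>...e
        result, data = {}, data[1:]
        while data[:1] != b"e":
            key, data = _decode(data)
            val, data = _decode(data)
            result[key] = val
        return result, data[1:]
    # byte string: <length>:<bytes>, e.g. 4:spam
    colon = data.index(b":")
    length = int(data[:colon])
    start = colon + 1
    return data[start:start + length], data[start + length:]
```

A real torrent file decodes to a dict with keys like `announce` (the tracker URL) and `info` (file name, piece size, piece hashes).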

A nice and short Bittorrent tutorial can be found here.

The difference between bittorrent and other P2P clients is that bittorrent splits the file into many pieces, downloads the different pieces from different nodes, and aggregates the file locally. With other P2P clients, once you've made contact with another node that has the same file, the client tries to download the entire file from that one node. So in their case, the speed of the download depends on that node's upload bandwidth and your download bandwidth. Bittorrent conveniently bypasses this bottleneck!
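The piece idea can be sketched roughly like this — split a file into fixed-size pieces, record a hash for each (the way a torrent file does), and then pieces fetched from different peers, in any order, can be verified and reassembled locally. (The piece size and function names here are my own assumptions, not from any real client.)

```python
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB, a typical piece size

def split_pieces(data: bytes, piece_size: int = PIECE_SIZE):
    """Split a file into pieces and record each piece's SHA-1 hash,
    so pieces fetched from different peers can be checked independently."""
    pieces = [data[i:i + piece_size] for i in range(0, len(data), piece_size)]
    hashes = [hashlib.sha1(p).digest() for p in pieces]
    return pieces, hashes

def assemble(received: dict, hashes: list) -> bytes:
    """Reassemble pieces that arrived out of order from many peers,
    verifying each one against its expected hash before joining."""
    out = []
    for index, expected in enumerate(hashes):
        piece = received[index]
        if hashlib.sha1(piece).digest() != expected:
            raise ValueError(f"bad piece {index}")
        out.append(piece)
    return b"".join(out)
```

The hashes are what make it safe to trust total strangers: even if a peer sends garbage, your client notices immediately and re-requests just that one piece.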

The democratic part about bittorrent is that as soon as your client gets one piece, it immediately shares that piece with everyone else as well! That speeds up the overall download!!

Now, once I got the concept, I got thinking. Currently, bittorrent is mostly being used for downloading pirated movies and software. But maybe it can be put to better use!

And here's the earth-shattering idea! All those websites are files, right? HTML files, but files nonetheless. And a file is a file, right? So, why not apply the same bittorrent concept to them as well? Take any of the login pages, for example: Yahoo, Hotmail, even Google or Rediff. Why do I have to download the file from San Francisco, or wherever the server is located? Why don't I download it from my neighbour who checked his mail a couple of hours ago?

And since HTML pages carry the date and time when they were last modified, say I want to check the news at BBC as of this morning, instead of right now. I shouldn't have to go all the way to the BBC server. I should be able to get the page from whoever has downloaded it since this morning, who's geographically closer to me and who's online. And if there are many such people online, I can get the different bits from everyone and my page will load even faster!!
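Something like this, maybe — a sketch of the "get it from a neighbour" rule: among peers who have a cached copy, pick the nearest one that's online and fresh enough, otherwise fall back to the real server. (Everything here — the `PeerCopy` fields, the distance metric — is purely hypothetical, not an actual protocol.)

```python
from datetime import datetime

class PeerCopy:
    """A hypothetical record of a page cached by some nearby peer."""
    def __init__(self, peer, fetched_at, distance_km, online):
        self.peer = peer
        self.fetched_at = fetched_at      # when the peer downloaded the page
        self.distance_km = distance_km    # rough geographic distance
        self.online = online

def pick_source(copies, not_older_than):
    """Return the nearest online peer whose copy is at least as fresh
    as `not_older_than`, or None (meaning: go to the origin server)."""
    candidates = [c for c in copies
                  if c.online and c.fetched_at >= not_older_than]
    if not candidates:
        return None
    return min(candidates, key=lambda c: c.distance_km)
```

So if I want "BBC as of this morning", I pass this morning's timestamp as `not_older_than`, and only when no neighbour qualifies does my request cross the ocean.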

What do you think, should I patent this idea? I don't have a goddamn clue about the technology behind all this, so I'm clueless as Dr. Watson after a murder.

Any Sherlocks out there?
