I often hear statements of the form “X is cheap, don’t worry about it”. They usually come up in design sessions; software or infrastructure, it doesn’t matter, you can hear this sort of statement in either kind of session.
The most common versions would be in reference to disk space, bandwidth, and CPU.
I suppose this type of thinking is a result of Moore’s Law. One of the troubling things about it is that it is frequently wrong, and it usually indicates a closed mind, an inability to see the big picture, a lack of insight, or plain short-sightedness. In every case, when someone makes an “X is cheap” statement it’s a safe bet that they haven’t really thought things through. These statements are very dangerous, particularly if you hear one in a design session, and of particular concern if the person saying it is a team leader or manager.
The most dangerous aspect of these statements is that on the surface they may appear accurate and correct. Unfortunately, unless we are designing a system that people will only look at (i.e. the surface), rather than actually use, they are rarely correct.
I’ll attempt to examine the most common of these statements to show how and why they are incorrect assumptions. The analysis will be from a business/enterprise perspective rather than a personal/consumer one. You might be surprised to see how inter-related these can be! Let’s start with:
Disk is cheap, don’t worry about size
I will have to start by conceding the fact that disk storage has gotten very cheap indeed. The average consumer can put a terabyte of disk into their PC for less than $200. So how can that NOT be cheap?
We first need to understand what is really being said here. This usually isn’t a reference to disks and storage in general; it will be in reference to data files or databases.
So let’s talk about files… In the past, files weren’t that big: they were text files or Word documents, and although the Word documents were bigger, the size was still reasonable. This is no longer true. Text files are rarely used anymore; the same information is now in a PDF, a Word document, an image, or a multi-media file. Those PDF and Word files usually have graphics in them too, and to ensure that they look good when printed, the graphics are stored at full resolution, even if they are only displayed thumbnail-sized in the document itself. The average document size is now measured in MB instead of KB, and for media files, GB is becoming more and more common.
So, what am I trying to say? The amount of storage available at a reasonable cost has increased by orders of magnitude, but so has the average size of the files we store. The net result is that we can store about the same number of files as before.
Even if we can store more files, there is another cost involved. The more files and directories we have on our disks, the more difficult it becomes to find a file when we need it. If users can’t remember where they put a file on a file server and have to search for it, that potentially ties up a lot of I/O and CPU resources as the server scans all the files. That results in slower performance for all the other users relying on that server. That same server could be hosting a database, or a VM, or even just the virtual disk of a VM, and each of those would be affected by the search. Even if you don’t search for files very often because you are so well organized… those files are getting scanned anyway a couple of times a day by your anti-virus and anti-spyware software.
Now let’s look at databases… Modern database servers make them easy to use, quick to access, all those good things. So where is the problem? Let’s look at how a real production database gets used. To start with, you don’t just put a production database on a single disk; that’s too risky, since disks can fail. You put it on a RAID array instead, using mirroring, striping, or whatever. The end result is that we now have two or possibly three copies of that database (depending on the RAID level). We need to back up the database; that’s another copy… but in today’s web-enabled global economy, we can’t take the database server offline to back it up. So we take a snapshot to another disk, so that we can write the backup to tape without the data changing underneath us; that’s another copy. Add some developers into the mix: any system that has a database probably has developers working on new versions of the software, and they need working copies of those databases for testing and development. That’s another copy per developer. What are we up to now? Five or six copies? Once again, the way we use that so-called cheap disk space quickly changes the cost per MB when each MB we use is actually stored multiple times.
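To put rough numbers on this multiplication effect, here is a small Python sketch of the effective cost per GB once every copy is counted. The copy counts, the developer head-count, and the $0.10/GB raw price are all assumptions for illustration, not real figures.

```python
# Effective cost per GB of a "cheap" database, once every copy is counted.
# All figures below are illustrative assumptions.
RAW_COST_PER_GB = 0.10  # assumed raw disk price, $/GB

copies = {
    "primary on RAID-1 (data + mirror)": 2,
    "snapshot taken for backup": 1,
    "backup copy on tape": 1,
    "developer working copies": 3,  # assuming three developers
}

total_copies = sum(copies.values())              # 7 copies in all
effective_cost = RAW_COST_PER_GB * total_copies  # $/GB actually paid

print(f"{total_copies} copies -> ${effective_cost:.2f}/GB, "
      f"{total_copies}x the sticker price")
```

Under these assumptions, every “cheap” GB in the schema actually costs seven times the sticker price, before any bandwidth or backup-window costs are counted.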
Backups become important too. Bigger databases take longer to back up, which means they also take longer to restore. That can be crucial when the database for a web-commerce site goes down and we need to restore it. The company that chose to believe the “disk is cheap” myth will be offline waiting for the restore a lot longer than the company that chose to use its disk space more wisely.
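A back-of-the-envelope restore-time calculation makes the point. The 100 MB/s sustained restore throughput and both database sizes are assumptions chosen for illustration; real restore rates depend heavily on the hardware and backup software involved.

```python
# Rough restore time for a database at an assumed sustained throughput.
def restore_hours(db_size_gb, throughput_mb_s=100):
    """Hours needed to restore db_size_gb GB at throughput_mb_s MB/s."""
    seconds = db_size_gb * 1024 / throughput_mb_s
    return seconds / 3600

lean_db = restore_hours(100)      # a trimmed 100 GB database: ~17 minutes
bloated_db = restore_hours(1024)  # a 1 TB "disk is cheap" database: ~3 hours
```

The relationship is linear, so every GB that didn’t need to be there is paid for again, in downtime, during every restore.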
Storage in general is no longer local in a business environment. Files get shared on file servers, as well as through the use of SAN and NAS devices. This means that accessing the data on those “cheap” disks is now over a network and now bandwidth enters into the equation. A perfect segue into…
Bandwidth is cheap, don’t worry about the size of your data
Once again, we start by trying to understand what is really being said here. This typically refers to the speed of the available network rather than its cost, although it can easily refer to both.
The speed and cost of networks have improved dramatically over the years. Modems went from 300 baud to 1200, and on and on up to 14400 and 56Kbps, only to be replaced by broadband connections anywhere from 128Kbps to 10Mbps. Local networks went from proprietary systems to 10Mbps Ethernet, then 100Mbps, and now 1Gbps, while wireless networks started at 11Mbps and are now 54Mbps or 108Mbps.
What was once inconceivable to transmit over a network is now commonplace. So where is the problem here?
The first misconception comes from a single user’s perspective of the network. This is easiest to explain with an example. A typical office will have a 100Mbps network, providing 100Mbps of bandwidth to each user. Sounds pretty good, and since we don’t use hubs anymore and everyone uses switches, user A doesn’t get impacted by what user B is doing on his segment of the network. Or does he? In most cases the answer is yes, they are affected. Those network switches only isolate traffic between two end points. So, I might have 100Mbps between me and the switch, but ultimately I don’t need anything from the switch; I need it from some resource on the network. That resource might be a file server, a database, or a website. There is a significant chance that I’ll be competing with the other user to connect to the same end point. So while we both have 100Mbps to the switch, we are sharing the 100Mbps from the switch to the common end point, and we’ve effectively been reduced to 50% of the “perceived” bandwidth. Chances are that it won’t be just two users connecting to that common end point, but many more. In a small company, you are probably competing with ALL the other users, so divide that 100Mbps by 10 or 20, bringing us down to 10Mbps or 5Mbps, certainly not the amount of bandwidth that you expected to get.
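The worst-case arithmetic above can be sketched in a few lines of Python; the 100Mbps link speed and the user counts are the same illustrative figures used in the text.

```python
# Worst-case share of a switched link when every user hits the same end point.
def per_user_mbps(link_mbps, concurrent_users):
    """Bandwidth each user sees if all of them contend for one server."""
    return link_mbps / concurrent_users

for users in (1, 2, 10, 20):
    print(f"{users:>2} users -> {per_user_mbps(100, users):.0f} Mbps each")
# prints 100, 50, 10 and 5 Mbps respectively
```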
I will admit that I’m not being completely fair in my calculation; it’s far too primitive to be accurate, since everyone would have to be accessing the same server at the same time for those numbers to hold. Statistically we will get much better performance; exactly how much better is harder to predict, since it depends on what is being accessed. Of course, the bigger the files we are after, the longer we spend transferring their data, and the more likely we are to be affected. This is where we can start to see the impact of the “disk is cheap” mindset on other (seemingly unrelated) areas of computing.
So, I haven’t proven anything other than that you won’t see 100% of your bandwidth. We’ve only been talking about a typical office LAN; let’s examine networked applications.
There are two typical varieties of networked applications: those that explicitly use the network by implementing some form of protocol, and those that use it indirectly through file sharing, web services, SOA, or database connections. If a protocol is designed with “bandwidth is cheap” thinking, you’ll probably find that it sends and receives lots of data. The networks are indeed fast enough to make this appear to be a non-issue, unless you happen to be the sysadmin or the hosting provider. More efficient use of bandwidth translates directly into more users, which translates directly into revenue. Once you max out your bandwidth with users, you may also need to buy or build new servers to handle more customers.
The more customers you can fit into the same bandwidth, the fewer servers and infrastructure you need, which directly affects the capital investment required (which again, affects Time-To-Revenue, since you have to recoup the infrastructure cost to become profitable).
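As a hypothetical capacity-planning sketch, suppose each server has a 1Gbps uplink and compare a chatty protocol to a leaner one. All of the figures here, the uplink speed, the per-user rates, and the user count, are assumptions for illustration only.

```python
import math

LINK_KBPS = 1_000_000  # assumed 1 Gbps uplink per server

def servers_needed(users, kbps_per_user, link_kbps=LINK_KBPS):
    """Servers required if uplink bandwidth is the limiting resource."""
    return math.ceil(users * kbps_per_user / link_kbps)

chatty = servers_needed(50_000, 200)  # assumed 200 Kbps/user -> 10 servers
lean = servers_needed(50_000, 50)     # assumed 50 Kbps/user  -> 3 servers
```

Under these assumptions, trimming the protocol from 200 Kbps to 50 Kbps per user cuts the server count from ten to three, with the capital and hosting costs to match.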
More and more often you will find servers running in VMs, which aggregate the network traffic of all the VMs onto a single physical network. Applications running on VM servers that don’t make efficient use of the network could find the VMs running out of available network bandwidth before they run out of available CPU. If there is network traffic between two VMs, it never hits the physical network, but it does consume additional CPU, possibly impacting the performance of the other VMs.
When we introduce wireless and cellular networks to the mix the impact is more easily observed. Sure that new 802.11n wireless is fast, but when you have an office full of users streaming video over the same shared wireless bandwidth…
So if you encounter someone who suggests “X is cheap, don’t worry about it” in your next design session or troubleshooting discussion, you will know what they are really saying: they probably don’t understand the issue at all.