Virtual Machine Server Hosting

My employer, Vertigo Software, has graciously hosted this blog for the last year. But as blog traffic has grown, it has put a noticeable and increasing strain on our bandwidth. Even on an average day, blog traffic consumes a solid 30 percent of our internet connection, and much more if something happens to be popular. And that’s after factoring in all the bandwidth-reducing tricks I could think of.

This is a companion discussion topic for the original blog entry at:

You might want to consider making an entry on just what your statistics look like. Or even putting up a stat/graph page. I did it with my website:

It sounds like a beefy box, but I’m wondering just how much performance is lost to virtualization? Of course, that only matters if you’re pumping out some serious bits per second.

Ladar, all my stats are public and linked from the front page.

We use Apache reverse proxying at our organisation (we’re a FreeBSD house); I guess this is similar to ISA Server. This allows us to move all of the SSL processing to one place (plus a backup), meaning that the application servers (custom Apache builds) don’t need to worry about it.

Another advantage is that all SSL certificates are held on the proxy machines, reducing the number of people with access to them.

As far as I know, though, each SSL site needs its own IP.
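A rough sketch of why that’s true: without something like SNI, the server has to pick a certificate based only on the socket it accepted the connection on, since the HTTP Host header is encrypted and not yet readable at handshake time. The IPs and certificate filenames below are made-up placeholders, just to illustrate the one-cert-per-IP:443 constraint.

```python
# Pre-SNI TLS: the server must choose a certificate from the local
# (ip, port) alone -- the Host header arrives only after the handshake.
# Hence one certificate (and one SSL site) per IP on port 443.
CERT_BY_ENDPOINT = {            # hypothetical mapping
    ("203.0.113.10", 443): "blog_example_com.pem",
    ("203.0.113.11", 443): "shop_example_com.pem",
}

def pick_certificate(local_ip, local_port=443):
    """Return the cert configured for this socket endpoint, or None."""
    return CERT_BY_ENDPOINT.get((local_ip, local_port))
```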

Hey TristanK - I agree with your assertion that wildcard certs can help somewhat with this issue, BUT I wanted to point out that due to differences in interpretation of the spec, wildcard certs generally only work on “three-level” domains. Four-level ones like the one in your example will cause certain very common browsers to give warning messages and choke.
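That label-counting behavior is easy to sketch: in the common interpretation, the `*` matches exactly one DNS label, never a dotted subtree. The hostnames below are hypothetical examples.

```python
def wildcard_matches(pattern, hostname):
    """Check a certificate wildcard against a hostname the way most
    browsers do: '*' matches exactly one DNS label, so *.example.com
    covers secure.example.com but NOT a.b.example.com."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # a wildcard cannot span multiple labels
    for p, h in zip(p_labels, h_labels):
        if p != "*" and p != h:
            return False
    return True
```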

Musing: it might be one cert per socket rather than per IP. Unfortunately, that’s a next-to-meaningless distinction, because everyone’s interested in the one port 443 (nobody wants their website to be the one you have to type a port number for, and a nonstandard port might also need reconfiguration at the client-side firewall/proxy to allow SSL tunneling), and there’s only one tcp/443 per IP.

So, let’s assume one IP per certificate is a requirement based on ISA and Windows’ current implementation. The problem then becomes that all those IPs need to be externally visible at some point (assuming that we’re not dealing with SSL Host Headers).

If you’re running, say, a handful of sibling subdomains, these can potentially share a wildcard certificate and use only a single server IP for all those sites.

If your namespace is largely-hierarchical-but-also-quite-flat :) you may be able to significantly reduce the number of IPs and certs you need to manage (think: all those sitennnns could share one cert…). It can be done today, so it’s probably worth investigating.

See also the ISA blog here:
Bottom line: in the future, hopefully an SSL equivalent to Host Headers will be implemented. Right now, not by us, afaik.

ISA will terminate SSL connections itself (if you let it, which we’d usually suggest), inspect the traffic, then forward the request to the published server internally. So internally, you can use CrazyPKI (if that’s a real product name, I apologize to the vendor, but what were you thinking!? (and I claim first-use rights if not)) or self-signed certs, or no certs, and private IP addresses, so you’re golden there. The “cost” of SSL is in terms of public IPs. Internally, you can use different certs per published site or not (or no certs at all and everything on the one IP using host headers), without preventing you from using wildcard certs externally.

It’s really a question of where you want to manage your cert complexity - you can do it “out front” at the ISA Server/SSL Termination tier and organize sites however you want internally, or you can straight-through the SSL and manage the complexity at the web server.

Sorry for the rather long-winded “it’s up to you”…!

What virtualization software are you using?

First, your post was serendipitous. I just signed up a client with Crystal 5 minutes ago. You’ve never led me wrong before…

Second, I’m curious as to the Internet connection options that PWillis mentions. I work for a bank (unrelated to the site I mentioned above) and we HAVE to use a T-1 to connect us (in Kansas City) directly to our main office in Cheyenne, WY. We actually have 5 of them, at a cost of around $1200 per month each.

The home office is paying somewhere around $1000 per month EACH for 2 T-1s that connect them to the internet.

So even if I’m here at 3 AM on a Sunday morning, my best access will only ever be about 3 Mbit/second (I think I got that right).
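You did get that right: a T-1 is 1.544 Mbit/s, so two of them bonded for internet access come out just over 3 Mbit/s.

```python
T1_MBPS = 1.544          # DS1/T-1 line rate in Mbit/s
internet_t1s = 2         # the two internet-facing T-1s at the home office

total = T1_MBPS * internet_t1s
print(f"{total:.3f} Mbit/s")   # about 3.1 Mbit/s, shared by everyone
```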

Are there other options for internet access? I’m a lowly programmer, but even I have a sneaking suspicion that the guys out in Cheyenne are all old and don’t want to learn anything new, so they go with what they already know, i.e., T-1s that were all the rage 10-15 years ago.

Anyone want to point me to a newsgroup where I could ask some questions and get some data to kick the guys in Cheyenne in the ass so they would get us some faster iNet access?

From my house, through Time Warner, I get 500k per second, which literally seems 10x faster than what I get here at work. And that’s for $50 per month!

Jeff - thanks for the info on how you got them to give you 64-bit. I’ve been a CT customer for years, so I know they have good support. I’m pretty sure you’ll be happy with them for a long time. The biggest mistake I ever made was switching one of my servers from CT to GoDaddy dedicated hosting to save $80/month. GoDaddy’s dedicated support is abysmal.

Yeah, this is called reverse proxying, and in the OSS world it’s usually set up using Apache and Squid. It’s also possible to do with IIS, ASP.NET, and HttpHandlers. I like the ASP.NET solution, where I integrate my own custom load balancer algorithms.
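The simplest such algorithm is plain round-robin: hand each incoming request the next backend in rotation. A minimal sketch (the backend addresses are placeholders, and a real handler would forward the request to whatever `pick()` returns):

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin load balancer: backends are handed out
    in strict rotation, one per call to pick()."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        # Each call returns the next backend; wraps around forever.
        return next(self._cycle)

# Hypothetical internal app servers behind the proxy.
lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080"])
```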

Oh yeah? I love AQuest Hosting.

I use Nginx too (mainly for the small memory footprint) with spawn_fcgi to serve dynamic pages. I wish Nginx would support .htaccess as moving stuff around is a nightmare.

What kind of wonky 20th century Internet connection is Vertigo running? They should be able to host this blog without impact.

Let’s visit some numbers:
Your homepage currently weighs 350KB.
Let’s assume that tomorrow you have 60,000 page views, and that all of them are for the homepage and the homepage only.
Let’s also assume that all of your readers are in one time zone, so your hits are distributed across 8 hours not 24.
That means you would serve about 164,063 Mb (megabits) in 8 hours, or 5.7 Mb per second.

Keep in mind that this is an absolute upper bound – on your highest traffic day, everyone views the heaviest page and nobody looks at another page and they all do it in a compressed 8-hour day. Real-life numbers would be a fraction of this.
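The arithmetic works out like this, using exactly the assumptions above:

```python
page_kb   = 350        # homepage weight in kilobytes
views     = 60_000     # worst case: every page view hits the homepage
busy_secs = 8 * 3600   # all traffic squeezed into one 8-hour window

total_megabits = views * page_kb * 8 / 1024   # KB -> kilobits -> megabits
peak_mbps = total_megabits / busy_secs

print(f"{total_megabits:,.0f} Mb total, {peak_mbps:.1f} Mb/s sustained")
```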

Bandwidth is cheap. You can get 15Mbps at home from FiOS. An office can get 100Mbps from a local fiber provider for $1000/month. So the worst impact your blog could possibly have is consuming 5% of the company’s bandwidth on the nightmare day outlined above.

… unless Vertigo is partying like it’s 1999 and using (gasp!) a T1 or something. So I ask you: why was this a pain point at all?

@Eric @TristanK:
I’ve seen this elsewhere with a hosting company, but I’m not sure how secure it can be.
If a customer wanted to host an SSL site and didn’t have a certificate or their own public IP associated with their account, they could use the hosting company’s certificate on a subdomain of the company’s domain, and the traffic would be proxied to their corresponding website.

Example: if your website were hosted there, you could redirect your customers to a subdomain of the hosting company that uses the hosting company’s SSL certificate, and traffic from that site would be proxied to yours.

Man, Squid would have a ball serving up something as static as this blog.


You do realize that on an average weekday this site has 40,000 page views? That’s frighteningly close to your absolute upper bound already. And not that I know much about business connections, or even the size of a company like Vertigo, but I’d imagine that 5.7 Mb a second is a good chunk of bandwidth. Even if it’s 10% of the capacity of the connection, it’s not going to all come at once. There will be times when this site takes 1% of the link, and others when it takes 80%, and if the combined other usage is 30%, then that’s a problem.

CrystalTech is rolling out the 64-bit machines sometime early/mid November. If you shoot the CT sales dept. an email they’ll give out some details.

Interesting read, and I like seeing CrystalTech getting some props. I’ve been using them to host client websites for years now, and CrystalTech has only gotten better and cheaper over time.

Man, no wonder you need that much hardware, if it’s running Windows Server natively and several more copies of it in virtual machines. I used a Pentium 200 with 64MB of RAM as both server and firewall at school; with a full LAMP stack, MySQL, Samba, an IRC client, and other bits running, it still had no problem transferring a few megabytes/sec over the school network. I put Tomcat on there as well for one class, and that was mildly laggy, but it didn’t affect anything else.

Cool - I’ve worked with CrystalTech quite a bit in the past. One of my clients used their dedicated servers for their 20+ customers. In my experience, their support was leaps and bounds above most web hosting/co-lo companies out there, and it was really nice to actually be able to talk to a human being when I needed to.