Andre's Blog

Personal blog of Andre Perusse

JavaScript: Rise of the Single Page Application

Last month I wrote about the current state of HTML application development and how it has changed drastically over the past five years. In that post I mentioned a lot of technologies that have evolved, but one area in particular that is in the midst of an amazing sea change is where the user-interface logic is executed.

Historically, most web applications have used a server-templating model where an application server such as Microsoft's ASP.NET processes data and emits it as HTML, which is then rendered by the client. If the user clicks a button or link on that page, an HTTP POST or GET is sent to the application server and a brand-new HTML page is returned to the browser to be rendered again. The problem with the server-templating model is the annoying screen refresh that occurs between postbacks and page re-draws, and the time required to re-render the new page, often when very little actual data on the page has changed. AJAX techniques help immensely with this problem by updating only the areas of the page that have changed. However, this is only half of the solution, and a further step must be taken to bring a truly rich user experience to a web-based application.

Enter the JavaScript "single-page application" (SPA). This is a relatively new technique (well, Google's Gmail has been using it since 2004, but it's only now starting to catch on in the mainstream) where only a single HTML page is downloaded to the client browser and all further updates are performed by JavaScript code. In fact, the single page that is downloaded is often no more than a shell used to bootstrap the JavaScript application. Once initialized, the application runs entirely in the browser, with no further HTML pages generated by the server. Instead, the JavaScript code is responsible for creating the HTML DOM elements and binding data to them. This is a truer reflection of the client-server architecture that has existed for decades, and it is the model most frequently used by Flex and Silverlight applications that run in a browser plug-in (no HTML).

This is not to say that the application server no longer serves any data; it simply no longer wraps that data in HTML. That is the job of the SPA running in JavaScript code in the browser. The JavaScript issues a remoting call to the server for data, which is most often returned in JSON format. Using a client-side databinding framework (such as Knockout.js), this data is then bound to an HTML construct. Any changes are sent back to the application server, once again in JSON format, where the server then performs operations on the data, such as updating a database. The result is a more responsive user interface that works much like any other rich application developed in more traditional client-side technologies.
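To make the round trip concrete, here is a toy sketch of the flow described above: JSON arrives from the server, gets bound to a "view", and an edit goes back out as JSON. This is deliberately not the real Knockout.js API (no observables, no DOM) - the `applyBindings` function and the `view` object here are made-up stand-ins just to illustrate the shape of the data flow.

```javascript
// Pretend this JSON string arrived from a remoting call to the server.
var serverResponse = '{"firstName": "Andre", "lastName": "Perusse"}';

// Parse it into a plain JavaScript object (the "model").
var model = JSON.parse(serverResponse);

// A trivial "view" standing in for HTML elements; a real databinding
// framework would push these values into DOM nodes instead.
var view = {};
function applyBindings(model, view) {
  for (var key in model) {
    if (model.hasOwnProperty(key)) {
      view[key] = model[key];
    }
  }
}
applyBindings(model, view);

// When the user edits a field, the change is serialized back to JSON
// and sent to the server for processing (e.g. a database update).
view.firstName = "Andy";
var payload = JSON.stringify(view);
```

A real framework adds the hard parts - change tracking and two-way synchronization with the DOM - but the JSON-in, JSON-out shape is the same.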

From where I'm sitting, 2012 is shaping up to be the year of the SPA. There has been a tremendous explosion in the number of JavaScript libraries and frameworks designed to give more power to the client-side developer. This is truly fantastic - front-end developers have more capabilities at their fingertips than ever before, and the list keeps expanding every month. However, the exploding popularity of JavaScript components is also part of the problem. Many, like me, have no idea where to start. There certainly isn't a clear winner in the race to establish the de-facto framework, and in fact many existing frameworks are happy to provide just what they believe is necessary to fill a given need. Assembling all the pieces to create a complete framework for a web application project is still a task left to the developer.

That is not so much a bad thing, but compare it with the RIA frameworks of Flex and Silverlight. With those fully-featured frameworks, you might incorporate the odd extra component or pattern and maybe some additional UI controls, but JavaScript SPA developers have many more choices than that to make. One of the big problems with JavaScript in the browser is that it was never designed to house large applications. Modularizing your JavaScript code involves a peculiar set of work-arounds to address the fact that JavaScript is more object-based than object-oriented (though some may disagree), and it requires the use of closures to achieve some degree of modularity. Among other things, this often leads to what can best be called namespace collisions, since all variables declared outside of a function in any script file live in the global namespace. So when you bring all these pieces together in your application, you will often find conflicts that you must resolve yourself.
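The closure work-around mentioned above is usually called the "module pattern": wrap your code in an immediately-invoked function expression (IIFE) so its internals stay private, and expose only a single namespace object. A minimal sketch (the `myApp` name is just an example):

```javascript
// Create (or reuse) a single global namespace object instead of
// scattering variables into the global scope.
var myApp = myApp || {};

myApp.counter = (function () {
  var count = 0;  // private: lives in the closure, invisible outside

  // Only this returned object is public.
  return {
    increment: function () { count += 1; return count; },
    current: function () { return count; }
  };
}());

myApp.counter.increment();
myApp.counter.increment();
// myApp.counter.current() now reports 2, and no other script on the
// page can reach or clobber the private "count" variable.
```

Another script defining its own `count` variable can no longer collide with this one - only the single `myApp` name is global, which is the whole point of the pattern.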

Despite these challenges, the SPA is emerging as the preferred methodology for developing rich and responsive applications in a browser (in no small part because Flex and Silverlight aren't supported on increasingly popular mobile devices). User-interface control vendors are now beginning to offer commercial JavaScript control libraries that have no particular server requirement at all - they consist only of client-side JavaScript code. Developer productivity is still a challenge in this environment as IDEs still have a difficult time with the dynamic nature of JavaScript. But advances are being made and the SPA space is absolutely electric with increasing developer interest and daily news on technique improvements. Indeed, it will be very interesting to see and participate in the SPA revolution over the next couple of years.

Web Development in 2012: This Ain't Your Daddy's HTML

For the first 11 years of my professional career I designed and built applications using HTML. When I first started out, I mostly just specified the application design, having other people much smarter and more talented than me do the coding and graphic design, while I would write the HTML myself (back in the day, web "programming" was done in C in the form of CGI scripts, which was something I wanted nothing to do with). With Microsoft's Active Server Pages I graduated to doing the coding, too, which was a boatload of fun.

Over the years I considered myself pretty darn clever at web application design (truthfully, I was just a hack, but I "thought" I was clever). In 1998 I was using Internet Explorer 4 and Microsoft's Remote Scripting feature to refresh data on a web page without doing a full server refresh. This was years before AJAX became a known term, much less a popular feature. A few years later I was using the XMLHttpRequest object to do the same thing. Naturally this required a firm grasp of JavaScript and the browser Document Object Model (DOM), though fortunately I wrote internal corporate applications, so cross-browser compatibility wasn't much of a concern (and Microsoft had soundly destroyed Netscape by this time anyway).

I continued honing my JavaScript and DOM knowledge and skills up until about 2007. You remember 2007, don't you? Well, let me refresh your memory anyway. There was no Google Chrome. Mozilla Firefox was barely an up-and-comer. Internet Explorer had only reached version 7 three months earlier (October 2006). Hell, even Facebook had just opened itself to public registrations in September 2006. AJAX was a pretty solidified term by then, but its use wasn't very widespread yet. Web services were all SOAP-based (no REST for you!), and jQuery was just barely starting to gain traction as the de-facto JavaScript abstraction library. HTML and the browser DOM had stagnated for so long that developers were clamoring for a better future.

Enter the era of the Rich Internet Application (RIA). Adobe Flex 3 was released in June, 2007 with a much improved developer feature-set and no longer carried a server licensing requirement. It was way more powerful than the state-of-the-art HTML being used then and many, many web development projects were transitioned from HTML to Flex. Flex's popularity grew for several years and inspired Microsoft's own RIA entry, Silverlight. It seemed as though the future of web applications would be written for browser plug-ins. My own career took a slightly different turn at this point, and for two or three years I worked on .NET WinForm applications but have now found myself on a Flex project for the past little while.

Recently, I've had the opportunity to return to my roots and begin working on a brand new web application. Ah yes, HTML - my old friend. I cracked my knuckles and sat down to dig into everything that was new and wonderful in the world of web development. Even though I had made it a point over the years to stay up-to-date on the terms and technologies being used and advanced in the HTML world, I had no reason to actually learn about them in any depth. So I started to bootstrap my new application, and I have been awash in the staggering number of current techniques that are employed in modern web application design. This ain't your daddy's HTML.

Well, actually, HTML itself hasn't changed too terribly much. The latest version, HTML5, has some new tags, sure - some to help with semantic markup, and the Canvas element seems poised to open a whole new world of capabilities. It also has its own Video tag to replace Flash video, and it adds local isolated storage. But the real changes are in CSS3, JavaScript libraries and frameworks, and the server programming model itself. CSS3 has just about turned into its own programming language, no longer satisfied with simply defining static layout and appearance. While it's unfortunate that complex CSS must still rely on pre-processors (like LESS) for some conveniences, CSS attributes now support complex 3D transformations, and even the rule selectors now have a dizzying array of capabilities. Some animation requirements can now be defined entirely in CSS without the need for any JavaScript at all. And I thought having rounded corners without images was cool!

And speaking of JavaScript, wow - has it ever come a long way since 2007. Today, only a fool writes raw JavaScript to directly manipulate the DOM and update HTML elements. jQuery is the de-facto standard library not only for abstracting away the confusing and aggravating differences in DOM implementations across all the various browsers and versions, but also for adding some truly amazing capabilities to JavaScript. Its "fluent" API construction provides the ability to perform multiple operations with a single line of code. It is completely AJAX-aware, making rich client-side interfaces a breeze to implement. jQuery UI, while somewhat less popular than jQuery itself, builds on top of jQuery to provide some user-interface goodies I could only dream of back in 2005.
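The "fluent" chaining that makes one-liners like `$("#msg").addClass("active").text("Hello")` possible is a simple trick: every method returns the wrapper object itself. Here's a toy illustration of that mechanism - this `Wrapper` type is made up for the example (it is not jQuery), and a plain object stands in for a DOM node so no browser is needed:

```javascript
// A minimal wrapper type whose methods return "this" to allow chaining.
function Wrapper(element) {
  this.element = element;
}

Wrapper.prototype.addClass = function (name) {
  this.element.className =
    (this.element.className ? this.element.className + " " : "") + name;
  return this;  // returning the wrapper is what enables chaining
};

Wrapper.prototype.text = function (value) {
  this.element.textContent = value;
  return this;
};

// A plain object standing in for a DOM node.
var node = {};

// Two operations in a single statement, jQuery-style.
new Wrapper(node).addClass("active").text("Hello");
```

jQuery's real implementation is vastly more capable (it wraps whole sets of matched elements, for one), but the `return this` convention at the end of each method is the heart of the fluent style.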

But jQuery isn't the only JavaScript game in town. JSON (JavaScript Object Notation) has seemingly come out of nowhere to replace XML as the preferred inter-platform data exchange format. Like XML it is "human-readable", but it comes without all the heavy tag "weight" of XML and is immediately consumable by JavaScript code without the need for a separate parser library. Marrying data to HTML elements is a new class of JavaScript frameworks that implement the MVC or MVVM patterns, like Knockout.js. These frameworks simulate the automatic model-binding architecture available in other UI frameworks, such as Windows Forms, Silverlight, and Flex.
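A quick illustration of both points - the lighter syntax compared to XML, and the fact that modern browsers parse JSON natively via the built-in `JSON` object (the record itself is invented for the example):

```javascript
// A blog post record as JSON - compact, and no closing tags.
var json = '{"id": 42, "title": "Rise of the SPA", "tags": ["javascript", "spa"]}';

// The equivalent XML would be roughly:
//   <post><id>42</id><title>Rise of the SPA</title>
//     <tags><tag>javascript</tag><tag>spa</tag></tags></post>

// One built-in call turns the string into a live JavaScript object.
var post = JSON.parse(json);
post.tags.push("html5");  // immediately usable, no DOM traversal needed

// And one call turns it back into a string to send to the server.
var roundTrip = JSON.stringify(post);
```

Compare that with consuming XML, which means walking a parsed document tree node by node; with JSON the data *is* already a JavaScript object.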

So, the world of web application development on the client has become much more complex. While there are now a lot of tools out there to help put together a truly advanced UI experience, they continue to be disparate and must be implemented individually. There is yet another class of "utilities" out there that aim to alleviate this problem - the web application template. HTML5Boilerplate calls itself "A rock-solid default for HTML5 awesome" and includes many of the above JavaScript frameworks and a set of cross-platform CSS rules to give developers a strong starting point for new apps. While it brings a lot under one single download, there is still a ton of technology to learn.

And it's not over yet. The server programming model that I came to know and love, ASP.NET, is completely unrecognizable these days. While the old WebForms model still exists and is still getting some love from Microsoft in the latest .NET 4.5 release, the current wonder-child is ASP.NET MVC (which stands for Model-View-Controller). Today, you would be ruthlessly ridiculed if you started a new project using WebForms. ASP.NET MVC does away with the old server control and viewstate style of development and developers now (one could say "again" since the old Active Server Pages framework had the same benefit) have complete control over the HTML output rendered by the server. And the MVC system aims to provide a better "separation of concerns" among the various classes on the server, ostensibly making automated unit-testing a much easier task. Unfortunately, all the skills I learned about handling the WebForms Page object and event model are now completely useless, and so are all the third-party server control libraries I used. But hey, I wouldn't be in this business if there wasn't something new and exciting every other week.

Speaking of new and exciting, what about tooling? Visual Studio 2012 is now out in release-candidate form, and this release is all about web development. It seems that with Apple's refusal to allow Adobe Flash to run on iOS, and Microsoft's similar stance on eliminating browser plug-ins in the Windows 8 "Metro" version of Internet Explorer (which means no Silverlight, either), we have just experienced the RIApocalypse (a term recently coined by Dan Wahlin) and the whole world is swinging back to HTML development. Visual Studio now ships with jQuery, and much work has gone into improving the JavaScript development experience, including vastly improved Intellisense. Add a better CSS editor and a new feature called the Page Inspector, and you have a very capable design-time experience. And don't forget the addition of NuGet - a package manager for automatically adding and updating various frameworks, utilities and extensions to your IDE and application projects.

I'm already exhausted just writing this post, and I haven't even mentioned Entity Framework - Microsoft's latest data-access technology du jour. So if you're like me and used to be a web developer back in the glory days of the mid-2000s, then moved into other work and are now back in the HTML world, prepare yourself for a lot of re-learning. The game has certainly changed, but I don't think there's ever been a better time to be a web developer. More capabilities, more utilities, more frameworks, more power!

Seasonic Platinum 860w Power Supply

Generally speaking, desktop computer power supply units (PSUs) aren't very exciting. You plug them into the wall, flip the switch, and your computer turns on. There just isn't much more to them. Or is there? It may come as a surprise to some that since a computer runs on electricity, the unit that provides that electricity is one of the most important components in the computer system. Poorly designed and/or cheaply manufactured PSUs are the cause of many issues with modern computers, including sudden system crashes, data corruption, and display glitches. Naturally, a more demanding and higher performing computer is more susceptible to the imperfections of a less-than-adequate PSU. In addition, power efficiency has in recent years become more and more important, and PSU manufacturers now proudly display the efficiency rating of their products.

The fact of the matter is that if your computer is experiencing any sort of random malfunction, the PSU becomes a prime suspect. And so it was with the last PSU I had purchased, a Corsair HX620w. While the model is highly rated and has many satisfied customers, my particular unit suffered from an affliction that caused my hard drive to periodically refuse to operate properly. Instead it would just sit there, clicking away whenever data was requested. I returned several hard drives, believing I had been astonishingly unlucky enough to receive a string of bad units. I finally moved the drive to a different power connector on my Corsair and it has worked fine ever since. But my trust in that PSU was shattered, and when it came time to look for a new one I turned my eyes to the legendary Seasonic.

Most PSU "manufacturers" actually outsource the production of their units. Seasonic builds its own, and their PSUs are regularly very well reviewed by those with fancy load-testing equipment that can observe even the smallest voltage fluctuation and the most minor of current ripples. So, after several months of intermittent research, I decided my next PSU would be a Seasonic. I eventually settled on the Platinum 860 for two reasons: it is utterly silent at up to 40% load, and it is the most efficient PSU money can buy (at the time of this writing, anyway).

Of course, I also wanted my new PSU to be modular, meaning that there are no cables "hard-wired" into the unit. This results in less clutter inside the computer case, since you include only the power cables you absolutely require. The actual wattage of the PSU was the least important factor in my decision. While many PSUs are approaching and even surpassing the 1000w mark, many tests have shown that even a demanding CrossFire or SLI system with two power-hungry video cards rarely needs more than 600 watts of power. But the lowest available wattage in Seasonic's "Platinum" efficiency series is 860 watts, so that's what I bought.

Now, let's be clear. This is way more PSU than I, or just about anyone else, needs. But there's something to be said for buying and working with a finely engineered piece of equipment. I believe this is one reason that Apple is so successful with its products. In fact, I was so pumped when it was delivered to my house, I performed the much revered "Japanese Unboxing Ceremony" (well, that's what I call it, anyway) on it. And, indeed, much thought and care has been taken in the packaging of this unit (see the picture gallery below). I especially liked the velvet pouch that the actual PSU was enclosed in, and the two-pocket vinyl bag for storing unused power cables. Also included were several zip-ties and velcro cable organizers.

Installation was simple and straightforward, and I had no problems with the length of the supplied power cables in my Antec P280XL, though one or two of them didn't have much slack left when I was done. Turning the unit on for the first time was somewhat interesting because, as I mentioned earlier, it is completely silent. In its "Hybrid" mode, the cooling fan doesn't even come on until the PSU reaches 40% load. And since I rarely play demanding games on my rig which would cause the graphics card to demand much more power, it's likely that I'll never hear the fan at all! It's also worth mentioning that other reviewers have stated that the Platinum series from Seasonic also has little to no "electronic" humming. Even with the fan off, the circuitry inside a PSU can still buzz annoyingly, but I haven't heard any noise whatsoever from this unit.

Performance-wise, well, what can I say? As one would expect, it powers my modest system just fine. I haven't experienced any random glitchiness or hard drive power hiccups like I did with the HX620. So I'm satisfied with it, but I recommend you seek out the super-geek sites that really put these PSUs through their paces, and measure every flutter and waver in the 12v and 5v power rails. The Seasonic Platinum 860w commands a premium price for performance that few truly need, but its build quality and attention to detail make it a solid foundation upon which to build any system. And you'll feel good knowing that you've bought some very solid kit.



Antec P280 Computer Chassis

I make my living writing computer software, but computer hardware is actually my first love. I LOVE HARDWARE! Motherboards, CPUs, video cards and hard drives. Nothing gets me quite as giddy as when the UPS man delivers a package from my favorite on-line computer store. And so it was last week that I received TWO packages - an Antec P280 chassis and a Seasonic Platinum 860W power supply (PSU).

I've been using an Antec P180 (the original Performance One model) chassis since just about the time they first came out. I can't be certain, but something tells me that was in 2005. Seven years is a long time to hang on to the same computer chassis, but the P180 was such a stellar performer that there never really was any reason to replace it. For about the same amount of time, I've also had a Corsair HX620W modular PSU. The Corsair may have worked fine for a year or two, I'm not sure, but I returned several new hard drives that had the "click of death" before I realized that my PSU was the real culprit. I was able to keep limping along with the Corsair fine for the most part, moving my hard drive to a different modular connector, but I was still plagued with flickering white levels on my monitors. I originally blamed my Radeon 5870 for this before once again pointing the finger at the Corsair.

I decided that I wanted to wait and get both a new chassis and new PSU at the same time. (Actually, I was waiting to get a whole new Ivy Bridge system but decided to get the chassis and PSU now.) But there really wasn't anything compelling in the chassis department. I could have moved to Antec's P183, but it was so close in design to the P180 that it didn't seem like a worthwhile upgrade. Silverstone has their FT02 which is a VERY nice, elegant design but is quite a bit larger than the Antec cases, and also significantly more expensive. I was also impressed by Fractal's Define R3 chassis, but at the time it lacked USB 3.0 ports and there was some concern over fan noise. So I waited and waited. And then finally, the P280 was announced late last year and I knew it was my next chassis. Still very similar in design to the original P180 including the "270-degree" fold-back front door, but now with front USB 3.0 ports which have also been moved to the top of the case where they are much more convenient. Also more convenient are the power and reset buttons on the top, no longer requiring me to open the front door where they are hidden away half-way down on the P180. Minor enhancements to be sure, but welcomed. Of far greater note are the superior cable routing capability, much quieter fans, and generally improved cooling performance (see below - CPU and GPU temps are 2 to 4 degrees better, while the motherboard and hard drive temps edge a little higher).


P180 temps - click to enlarge
P180 Temperatures
P280 temps
P280 Temperatures


Working inside the P280 is an absolute delight. Gone from the P180 is the separate "power supply zone" baffle, and the interior is now wide open. Add to that the cable routing space behind the motherboard (which has become a standard feature these days) and you have an environment that is no longer cramped and confined, but rather one that is open, clean, and organized. Here is a look at the insides of my P180 compared to the P280. Note that in the P180 I had moved my hard drive to a rather sloppy, unsecured position just sitting on the PSU zone divider. This was a "frustration" move to get the drive closer to a different power connector, since sharing one with my SSD drive in the bottom drive cage was causing the "click of death" I mentioned above. But even without the hard drive, you can see the mess of cables everywhere and how difficult it is to route them, especially from the PSU to various points on the motherboard, graphics card, and drives. A look at the insides transplanted to the P280 is, comparatively speaking, a work of art! Everything is neat and tidy, greatly improving air-flow and making maintenance a breeze, too.


Antec P180 - click to enlarge
P180 - Messy
Antec P280 - click to enlarge
P280 - Tidy


Other ease-of-use changes in the P280 include the tool-less install of 5.25" drives, like my LG Blu-Ray writer. Just slide the drive in and it locks in place via a cantilevered plastic locking tab. I further secured it with a couple of screws, though you can only do that on the right side - there are no screw holes on the left side. Still, with the two right-side screws, it's secured in there pretty solidly. One screw secures my 2.5" SSD in the top 2.5" drive cage (there's room for one more), and my 1.5TB hard drive is secured via four screws to a plastic caddy (with silicone grommets for vibration isolation) that slides into the main drive cage. Of course, the PSU is on the bottom of the chassis, in the same place as in the P180 but without any separating baffle this time. There's also a vent right below the power supply to aid with PSU cooling. Also of note is the use of thumb screws for both side panels and the expansion (PCI) slot covers.

Chassis cooling is provided by three 120mm Antec "TwoCool" fans - two on top and one on the back. The speed of each fan can be adjusted individually via switches on the back of the case. There are low and high speed settings, and I have mine set on low. The fans are significantly less noisy than the P180's three "Tri-Cool" fans, which I had all set on "Medium". I also had a 120mm orange Nexus fan on the front of my P180 to draw air in. Though you can mount up to two 120mm fans on the front of the P280 and/or another two 120mm fans on the other side of the main drive cage, I find that the provided fans offer the same cooling power as my P180's - but with one less fan and much less noise. In fact, the unit is all but silent from my sitting position about three feet away. There is still an intermittent resonant hum from the case, which I expect will prove little problem to eliminate once I get the chance to spend a little time tracking it down.

While the P280 is substantially less heavy than the P180 "beast" it is replacing (the P180 was 14kg while the P280 weighs in at only 10.2kg), it doesn't really feel less sturdy. The P280 is ever-so-slightly larger, too, but not enough to make a fuss about - it's worth the extra room for the cable routing ability. However, I do miss the "DeLorean" look of the aluminum side panels on my P180 (see image gallery below), but I suppose there's nothing wrong with flat black.

Overall, I am very satisfied with the Antec P280 and I look forward to owning it for another seven years. Highly recommended.


Windows Home Server 2011: Custom PC Build

I've blogged before about Windows Home Server (WHS) and its advantages as both a NAS and automated backup centre for your entire home network. For several years now I've been using an HP EX470 (modded with a quieter power supply fan, 2GB of RAM, and an AMD LE-1640 CPU) to which I've added two 1.5TB drives. It's been chugging along 24/7 since the day I bought it, and it works well.

But, I feel I'm losing a lot of geek-cred running my server on an old creaky Windows Server 2003 platform. WHS 2011 has been out for a while now and is based on the much more modern Windows Server 2008 R2 platform. I, like most WHS fans, was initially miffed that Microsoft removed Drive Extender from this new WHS version. Drive Extender is the feature that lets you seamlessly and painlessly add more storage to your server just by inserting a new hard drive and pressing a button. So I've not been in a big hurry to move to WHS 2011 since it requires more brain-effort in setting up a scheme for storage. However, time marches on and I feel it's time now to replace the venerable HP box.

Unfortunately, HP is no longer in the WHS business and there are very few other OEM options (in North America, at least). One is left with no choice but to assemble a custom-built machine on which to run this new WHS version. Thus, I set out a few weeks ago to start researching available hardware to build a nice server/NAS box. And boy, it wasn't easy. At first, I wanted to use as small a case as possible to replace the diminutive HP EX470. Well, there is NO computer case that can house four hard drives that is as small as the EX4xx machines (except for HP's own MicroServer, but it doesn't have a WHS OS option - and I also wanted a more powerful CPU). What I did find that was close was Fractal's Array R2 mini-ITX case. It has room for not four, but six hard drives and is only slightly larger than the EX470. After extensive research on this unit, however, I was turned off by reports that installing the actual hard drives is an absolute nightmare (you have to remove the entire drive cage first), by questions about its cooling capacity, and by uncertainty over whether the CPU cooler I planned to use would even fit. So I moved on to try to find something else.

Slightly larger than the Array R2 is the new Lian-Li PC-Q25. Hard drive management appears to be much easier in this unit, however, and the cooling capacity also seems much better. Unlike the Array R2, though, the PC-Q25 doesn't come with a power supply (PSU), and there isn't much room to install one, either. Finding a modular PSU with a maximum depth of 140mm wasn't easy, but I eventually discovered the Silverstone Strider Plus 500. This is the case and PSU I finally picked, though I also flirted with the idea of using an mATX case instead, including the Fractal Define Mini (an absolutely awesome case), a few Lian-Li units (why the hell did they have to use all those blue LED fans?), and even Antec's aging Mini P180. In the end, I decided that I didn't want a case as large as an mATX footprint, so the PC-Q25 won out.

Next up was the choice of CPU and motherboard. WHS doesn't really need a lot of horsepower, but I use it not only for backups and NAS duties, but also as a media streaming server for uncompressed 1080p material. I would also like to install SQL Server on it for some light-duty tasks, such as hosting my personal TFS source-control system. Depending on how loud it eventually turns out to be, I may also move it to the living room to serve double-(triple?) duty as my HTPC. So, I decided on an Intel Core i5 2500K. Overkill, to be sure, but it's only $100 more than an i3 2100 and for the added lifespan of the server, why not? For the motherboard, I really wanted to have an integrated Intel NIC so I chose the Intel DQ67EP mini-ITX board. It doesn't play well with Windows server OSes, but there are well known work-arounds to get the chipset drivers installed.

All that was left was to add some RAM, a hard drive, and a CPU cooler. WHS 2011 is a somewhat crippled version of Windows Server that supports only 8GB of RAM. I selected 2 x 4GB sticks of Mushkin Silverline Stilettos for this. For a hard drive, I picked a 2TB Western Digital Green drive to act as the system drive and video storage. This drive won't be mirrored or backed up. I was then going to move over my two 1.5TB drives as a mirrored volume for documents, pictures, and other important files. To cool the CPU, I picked the Scythe Big Shuriken 2, as it's a low-profile unit which should fit well in an mITX system.

Case: Lian-Li PC-Q25 $120.00
PSU: Silverstone Strider Plus 500 $80.00
Motherboard: Intel DQ67EP mITX $140.00
CPU: Intel Core i5 2500K $220.00
CPU Cooler: Scythe Big Shuriken 2 $45.00
RAM: Mushkin Silverline Stilettos (8 GB) $50.00
Storage: Western Digital 2TB Green $140.00

It took me over two weeks of heavy research to pick the components for my new server build. My biggest issues were the case and the motherboard. There aren't many NAS-oriented cases on the market, and motherboards with Intel NICs are rare. But I finally made the decision and was ready to go. Except when I added it all up, I had exceeded my budget by a good $300. I wanted to bring it all in for less than $700 if possible, but after taxes and shipping my custom WHS 2011 build came in at close to $1,000. Ouch. So for the time being, I'm going to stick with my trusty HP EX470 and hope it can last at least a few months longer.

Making The Jump To LightSpeed - Bell Aliant FibreOP

Ever since Bell Aliant began their rollout of FibreOP in New Brunswick two years ago, I've been anxiously awaiting its arrival in the Halifax area. Early this year, it was announced they would be rolling out the service in HRM over the summer months. Summer came and went, and while FibreOP was deployed in my neighbourhood in August, my cul-de-sac was passed over. Finally, in mid-October, it became available at my address and I made the appointment. A week and a half later, I had my 70/30 internet connection and the FibreOP TV package.

I've always been an eager beaver when it comes to faster internet speeds. Back in 1998 I was torturing the phone company (then called MT&T) on a regular basis to bring their ADSL offering to my town so I could get off dial-up. That eventually happened, of course, and while the DSL speed was increased over the years, it finally topped out at about 6.5 Mbps downstream and only 0.5 Mbps upstream. It was serviceable, but way behind the times, especially with Eastlink offering a 40 Mbps service. Still, I had my Sympatico email address that I didn't want to give up, so I stuck it out with Bell Aliant.


On the day of the appointment, I got up bright and early to await the arrival of the FibreOP tech, who would be here "between 8am and 6pm". After much hand-wringing all morning thinking the appointment might be cancelled, the techs (2 of them) arrived at about 1:30. They got right to work stringing a fiber-optic cable from the telephone pole to my house, and preparing the ONT (Optical Network Terminal) inside my house. About 3 hours later, they were all done and I had 70/30 internet and FibreOP TV.

Connecting the fibre optic line.
Hooray! He's here!


First off, the internet speed is a world of difference from the High-Speed Ultra DSL service I had been using. I wasn't sure if regular web browsing would be much different, but it is noticeably more responsive. It depends on the site, of course, but most popular sites now load completely in under a second; some even feel nearly instantaneous. Since I never used Eastlink's internet offering, I can't compare the experience with them, but it is a much better experience than DSL. As for file transfer rates, as you might expect, the difference is an order of magnitude. With DSL, my downloads would max out at around 0.8 MB per second (800 KB/s). When I can find a server with big enough pipes, my FibreOP downloads will sometimes reach 8 MB/s, though 5 MB/s seems to be more common. This makes a tremendous difference to usability: as a Microsoft developer I am often downloading large installers from MSDN, which I previously had to let run overnight but can now grab pretty much on demand, since even a 1.5 GB download takes only about five minutes at 5 MB/s. Oh, and my wife's web-browsing experience is unaffected when I'm downloading large files, which wasn't the case on High-Speed Ultra.
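As a quick sanity check on those figures, transfer time is just file size divided by sustained throughput. A sketch (ideal-case numbers; real downloads lose some speed to protocol overhead and server-side limits):

```python
def transfer_minutes(size_gb, rate_mb_per_s):
    """Minutes needed to move size_gb at a sustained rate_mb_per_s MB/s."""
    size_mb = size_gb * 1024  # treating 1 GB as 1024 MB
    return size_mb / rate_mb_per_s / 60

# The 1.5 GB MSDN installer example, at old DSL speed vs. typical FibreOP speed:
print(f"DSL at 0.8 MB/s: {transfer_minutes(1.5, 0.8):.0f} min")  # 32 min
print(f"FibreOP at 5 MB/s: {transfer_minutes(1.5, 5):.0f} min")  # 5 min
```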

Upload speeds are also in a completely different class. This is extremely important with the advent of "cloud computing" and internet-based file backup. If you upload files regularly (such as using Carbonite or Mozy for backup, or even uploading images to blog posts), you'll definitely appreciate the 30 Mbps upstream capability of FibreOP.

The new "modem" is an ActionTEC R1000H (bottom) that dwarfs my old DSL modem (top).

FibreOP TV

Moving on to the TV service: well, I had my reservations about moving off Eastlink to FibreOP TV. My thinking was that Eastlink has been doing TV for decades, while Bell Aliant has only recently gotten into that game. I like the "phone company" for my telephone and the "cable company" for my TV (I have too many trees around my property for satellite service to be an option). Of course, both companies now offer pretty much the exact same services, and their pricing structure is such that, financially, it makes little sense to divide your services between the two. The "bundle" pricing from both companies is designed to ensure you are very motivated to keep all your services with one provider, and FibreOP is no different. Even with the $15/month upgrade to 70 Mbps downstream, and opting for the "Best" bundle offering which includes the movie channels and HBO, I'll be saving somewhere in the neighbourhood of $75 a month (after the 3-month promotional price of $99 per month) versus keeping my cable service separate with Eastlink. So I decided to take a leap of faith, switch my TV to FibreOP, and cancel my Eastlink account.

I've only had FibreOP for five days, but the TV service actually does seem to be adequate. Some individuals on internet forums had reported that the picture quality with FibreOP TV was a bit "soft", or less sharp than Eastlink. This may be true, but if so the difference is very subtle. And it would surely be noticeable to me, as my "TV" is actually a 120-inch front projector (smaller screens are better at hiding signal flaws). And while the picture may not be as sharp (I haven't really decided yet whether it is or isn't), what does seem to be gone, or at least much less prevalent, is the "macro blocking" on high-definition stations that was so frequent on Eastlink. Macro blocking is the pixelation effect you see on a video image when there is a lot of motion on the screen and the provider has turned the video compression up rather high. If my research is correct, this may be because Eastlink uses MPEG-2 compression while FibreOP uses MPEG-4 (H.264). MPEG-2 is good (it's the codec used on DVDs and supported on Blu-ray discs) but requires almost twice the bandwidth that MPEG-4 needs for similar image quality. So perhaps FibreOP doesn't have to compress the signal as much due to the lower bandwidth requirements, though I'm really just guessing here. One thing that I am sure of, however, is the irritating lack of proper lip synchronization on at least one channel (CityTV) with FibreOP. I'm not sure what the root cause is and I haven't spent any time trying to fix it yet, but it's a problem I didn't have with CityTV on Eastlink.

The ONT (Optical Network Terminal) attached to a joist in my basement.

The PVR that FibreOP uses is the Motorola VIP1216 running Microsoft's MediaRoom IPTV software. It's a much smaller unit than Eastlink's Motorola DCX3400, and while the features are similar, the user interface looks completely different. Instead of the colourful, opaque UI on the Eastlink box, MediaRoom uses a translucent overlay on top of the video that's currently playing. I don't find either interface to be inherently better, but the MediaRoom fonts are much smoother (no jaggies) and the TV listings show a 2-hour window instead of Eastlink's 90-minute window. Also, the FibreOP unit is nearly silent compared to the constant hard-drive spinning and clicking you hear from the Eastlink unit. In fact, I had to double-check that the FibreOP box even came with a hard drive because I couldn't hear it at all! The recording options are slightly more impressive than the Eastlink PVR I was using, too. The FibreOP unit can record up to four programs at once, although only two of those can be high-def. FibreOP's PVR is also "whole-home" capable, essentially acting as a media server for all the TVs in your house. Eastlink also has a "whole-home" PVR option, though I've never tried either service - I only have one TV.

Not everything is rosy with the FibreOP machine, though. Eastlink's is definitely more responsive: I've found the FibreOP box hesitates more often, frequently requiring an extra second or two before it will process a command from the remote. Not a big difference, but noticeable. Also, Eastlink's PVR allowed me to plug in an external 1TB eSATA hard drive for vastly increased storage capacity. The FibreOP PVR only has a USB port, and it currently serves no function; I'm not even sure USB 2 can sustain the transfer rate needed for real-time recording of high-def video anyway. The hard drive in the FibreOP unit is a measly 160 GB (using a quaint IDE interface instead of SATA), but the MPEG-4 efficiency allows it to store just about the same amount as the bigger Eastlink drive. Still, not being able to expand the storage capacity of the unit is a definite disadvantage. Also, on occasion I've used the FireWire port on the Eastlink machine to record material to my computer for longer-term storage. No such ability exists with the FibreOP unit.
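Some back-of-envelope math supports that storage claim. The bitrates below are purely hypothetical round numbers (providers don't publish exact figures), but they illustrate how halving the bitrate doubles the effective capacity:

```python
def recording_hours(disk_gb, bitrate_mbps):
    """Approximate hours of video a disk can hold at a constant bitrate."""
    disk_megabits = disk_gb * 1000 * 8  # decimal GB -> megabits
    return disk_megabits / bitrate_mbps / 3600

# Hypothetical HD bitrates: ~16 Mbps for MPEG-2 vs ~8 Mbps for MPEG-4 (H.264).
print(f"160 GB at 8 Mbps (MPEG-4): {recording_hours(160, 8):.0f} hours")    # 44 hours
print(f"320 GB at 16 Mbps (MPEG-2): {recording_hours(320, 16):.0f} hours")  # 44 hours
```

So a 160 GB drive full of MPEG-4 recordings holding about as much as a drive twice its size full of MPEG-2 recordings is entirely plausible.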

FibreOP TV has a robust offering of "video on demand" (VOD) services, as did Eastlink. I haven't bothered much with either, but I'm completely aghast at FibreOP's $7 price tag for movies "rented" through VOD. Sorry, but that's way too much to charge for a movie rental. I'll be giving this feature a big miss.

Lastly, there are some differences in the channel line-up, both in available packages and stations. Moving from Eastlink, I've lost AMC, History HD and MovieTime HD (FibreOP doesn't offer AMC at all, History is in standard-def only and MovieTime is only available in standard-def in a $5/month theme pack) but FibreOP's movie package includes MPix, which was an additional charge with Eastlink. There are likely other differences (say, with Sports programming, but I'm not much of a sports fan anyway) but by and large, I'm satisfied with the FibreOP channel offerings.


Overall, I am extremely happy with my move to Bell Aliant's FibreOP services. I mostly made the move for the high-speed internet service, and opted to include the TV service only for the cost savings over a separate Eastlink account. Naturally, the internet offering blows pretty much everything else out of the water, while the TV option isn't bad at all. If you can live without AMC or some of the other channels only available on Eastlink, I would definitely suggest you consider FibreOP TV. As a bundled service, I would highly recommend it.

Silverlight: Displaying UTC Date Values in Local Time

I've recently started working on an existing project that uses Silverlight for the UI. One of the first things I ran up against was displaying DateTime values that are stored in the database as UTC. I want to display these values in the user's local time zone, not as UTC. The DateTime class in .NET has methods for converting UTC to local time and back, but I didn't want to write a translation layer between the objects (entities) coming from the remoting call and the actual Silverlight UI. So, what to do?

Thankfully, Google is my friend. I quickly found some source code for a ValueConverter that does just this. Check out the forum post for the details.

After I created this class in a separate project, all I had to do was add this ValueConverter to the App.xaml's <Application.Resources> element, and I was good to go. Well, almost. I'm using Telerik's RadGridView control for Silverlight, so the column in question looks like this:

<telerik:GridViewDataColumn Header="Last Modified" DataMemberBinding="{Binding LastModifiedDate, Converter={StaticResource DateTimeUtc2LocalValueConverter}}" />

The only problem here is that the grid control wants this value converted into an Object and doesn't specify a DateTime as the target type. After a small modification in the ValueConverter code to allow this, I was good to go.
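For anyone curious what the converter actually does under the hood, here's the core UTC-to-local idea sketched in Python for illustration (the real Silverlight code wraps the same transformation in C#'s IValueConverter interface; the function name and the fixed-offset zone below are mine, not from the original source):

```python
from datetime import datetime, timezone, timedelta

def utc_to_local(value, local_tz):
    """Tag a naive database timestamp as UTC, then shift it for display."""
    if value.tzinfo is None:
        value = value.replace(tzinfo=timezone.utc)  # stored values are UTC
    return value.astimezone(local_tz)

# A fixed offset stands in for the user's zone (Atlantic Daylight Time here):
atlantic = timezone(timedelta(hours=-4))
stamp = datetime(2011, 11, 1, 14, 30)  # naive UTC value from the database
print(utc_to_local(stamp, atlantic))   # 2011-11-01 10:30:00-04:00
```

In the Silverlight version, the same shift presumably happens inside the converter's Convert method, so the grid binding never sees a raw UTC value.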

My Love-Hate Relationship with the iPhone

I have never been much of a fan of Apple. Perhaps it's because I resent them for surviving the platform wars of the 90s while my favourite technology, the far-superior Amiga computer, was squandered by Commodore and faded into obsolescence. Or perhaps it's because, for all the Apple fans who beat the drum extolling the superiority of Macintosh machines, my experience reveals that Macs have just as many irritations as Windows machines, only different ones (side note: I sold Macs for over a year in the early 90s, around the time Apple was switching to the PowerPC processor). For whatever reason, I am unable to embrace the religion of Apple and, in fact, I resist it at every opportunity. Which is often quite difficult, since several of my respected co-workers are ardent Mac fans and I am inundated daily with Apple love talk while I grimace and grumble in the corner.

So then, what possessed me to place aside my hatred of all things from Cupertino and eagerly buy an iPhone 3GS 16 months ago, on a competing carrier, while I was only half-way through my 3-year contract with my HTC Touch (a Windows Mobile 6 phone)? Quite simply, it was and still is the best smartphone experience available. Even Android hasn't been able to achieve the same elegance and integration offered by the iOS device. Here is what makes the iPhone the must-have mobile device for me:

  • Large screen (for a phone, though Android phones are just as large now or larger)
  • Extremely responsive capacitive touch screen (again, Android matches it here)
  • Elegant and fluid user-interface (Android seems similar though I have never used it)
  • Availability of applications and extreme ease of purchase and installation
  • Unmatched media management experience
  • Killer-feature for me: it plugs into and integrates completely with my car stereo

While all these points are important, it's the last two or three that mean there is no alternative to the iPhone for me. While I'm not much of a music fan, I am a huge consumer of podcasts. I listen to them every day while commuting to and from work. It is the iPhone's media management capabilities combined with the ability to plug it into my car stereo that places it above all the rest. Other phones can be plugged into the AUX jack on my car, but only the iPhone integrates completely with it, allowing full control of the iPod feature via the radio's controls. And even if other phones had a similar dock connector, Android has almost no media story at all. How do you subscribe to podcasts and have them automatically updated on an Android phone? Windows Phone 7 is far better in this regard, relying on the relatively mature Zune media technology. However, Zune Marketplace support is extremely spotty outside the United States leaving the iPhone truly in a class by itself. Nothing can match the whole-package experience.

Which is why I was extremely frustrated when the iOS v4.2 update broke compatibility with my car stereo. For over two months I was stuck listening to podcasts in the car with my headphones on like a sucker. Many users complained on the Apple discussion forums and conveyed the indifference of Apple technical support when they reported the issue. Even though the car stereo integration worked fine with every previous iOS version, Apple's official response to the v4.2 problem was "contact your vehicle manufacturer." Which brings up another reason I've been able to ward off the Apple religion for so long: Apple's astounding arrogance. Not that other tech companies aren't arrogant too; in this respect, at least, Apple is no different from anyone else. Lucky for me, a generous colleague who is a registered Apple developer set me up with the iOS 4.3 beta when it was finally posted, and I'm happy to say it has fixed the car integration issue. But Apple has yet to release it to the general public.

So, while I'd happily switch my smartphone to a competing platform if I could, there just isn't anything else out there that meets my needs. It looks like I will continue to simultaneously praise and curse my iPhone for the foreseeable future. In the meantime, I'll keep my eye on each new Android and Windows Phone release and hope for some kind of universal car-phone media integration standard. Yeah, looks like I'll be an iPhone owner for a long time.

Where is the Programmer's Keyboard?

A few years ago, I wrote about my obsession with computer keyboards. At the time, I was particularly taken with the Microsoft Ergonomic Keyboard 4000, though I really wanted a keyboard that would let me remove the numeric keypad so that the mouse could be placed closer to where my hands usually reside: in the middle of the keyboard. Well, fast-forward more than four years and not much has changed. I'm still using the Microsoft 4000 keyboard both at work and at home, and I still wish the numeric keypad would take a hike.

As far as the numeric keypad goes, it turns out there are a few keyboards that either omit it completely, or, as in the case of the Microsoft Sidewinder X6 keyboard, allow you to move it to the left side. I've often wanted to try the X6, but it isn't available locally (I'd have to order it) and I've really become accustomed to the ergonomic layout of the 4000. However, recently I've become annoyed with another issue that I think deserves some mentioning.

I'm a programmer: I write code all day long. But writing code is a lot different than writing, say, a document in Word or even an email. Most keyboards are designed for writing prose, with normal sentences and paragraph formatting. Code looks nothing like this. On the one hand, code is full of symbols, a LOT of symbols that require the use of the SHIFT key to type. Take for example the angle brackets used for most XML-style markup languages (like HTML, XAML, and Adobe Flex). Shift key. Do you have any idea how many angle brackets I type in the course of a day? Or the C-style "curly brackets" that group classes and methods? Shift key again. Double quotes? Shift key. Number (or hash or pound, if you prefer) sign, percent symbol, ampersand, and asterisk? All need the shift key. I've lately come to find that very irritating.

And on the other hand, the process of writing code often requires further keyboard acrobatics. For compiled languages you have to build your source code. Sure, you can use the mouse to click a "build" button or select it from a menu, but that means, well, you have to lift your hand off the keyboard and use the mouse. Or, in the case of Visual Studio (VS), you can press CTRL+SHIFT+B to achieve the same effect. It's really not that hard, but you do have to cramp up your hand just a little to press three keys at the same time. How about debugging? In VS, pressing F5 will build and launch your program with the debugger attached. How intuitive. F5, the universal key for "refresh", is aggravatingly re-purposed in VS. And it does absolutely no good if you have to ATTACH to a running process instead of launching one. I will also frequently paste in a few lines of code, which sometimes messes up the formatting. The acrobatics to correct this? Hold down the Ctrl key, then press and release 'K', then press and release 'D' (I call this the Kraft Dinner maneuver, since it's the only way I can remember it). And what about other common coding tasks, such as checking code out and checking it back in? VS doesn't even have default shortcut keys for those! You have to invent your own crazy key combination that hasn't already been claimed by one of its other five ba-jillion commands.

Certainly, I am not the first to raise this issue. A very popular thread on Stack Overflow has many suggestions for programmers' keyboards. Even the removal of the numeric keypad is mentioned! (See, I'm not completely crazy, or at least not crazy all by myself.) But they are all still mostly general-purpose keyboards that just happen to have some characteristics that might make them more suitable for coders. And most of those characteristics simply have to do with the tactile response of the keys: how far they travel, whether they click, whether they use mechanical switches or cheap rubber membranes. Important characteristics, to be sure, but there is little that addresses the fact that writing code is a completely different ball-game from writing regular text, like this blog post for example.

Curiously, however, that same Stack Overflow thread suggests using the Logitech G15 gaming keyboard as a programmer's keyboard, the primary reason being the dedicated, programmable "macro" keys on the left-hand side. On the newer G510 there are 18 such macro keys, selectable in 3 banks for up to 54 macros. I'm going to do some further research on this, but it's not an ergonomic keyboard, so I'm still skeptical. And I'm a little uncertain how well it would interact with Visual Studio.

Thus, I submit that our industry, the programmer industry, needs a keyboard made just for us. One that has common symbols a single keypress away just like regular letters and numbers. One that has dedicated keys for building, debugging, source control, finding class and member references, and a handful of other common coding tasks. Our industry is huge - surely we are big enough to viably support a special-purpose keyboard, no? 

Don't have an SSD drive yet? You're crippling your computer!

Congratulations! You've just bought yourself a shiny new computer! It has more CPU cores than you have fingers, several gigabytes of RAM, a Blu-ray drive, a high-end graphics card, and... a spinning magnetic hard drive. You have the world's fastest computer, but you've left it crippled, limping along with a dead-slow mass storage device. In fact, I'm constantly surprised that the new Dell and HP machines that come out with each new Intel processor family still offer no SSD option. This is almost criminal!

Unless you've experienced the speed of a solid-state disk (SSD), you'd be surprised to see how much drag a conventional hard disk places on your system. Last year, I wrote about the new Core i7 machine I had just built, and how I was initially disappointed that it didn't seem much faster than the Q6600 machine it replaced. Then I installed an SSD as the system drive, which changed EVERYTHING. A year later, this machine still screams with the extreme low-latency provided by my 120GB OCZ Vertex (which is old tech by today's SSD standards).

Check out this recent article by Windows guru Ed Bott, which has some metrics on SSDs versus regular hard drives. And check out this Tom's Hardware head-to-head comparison of today's SSDs. I also love this quote from a Gustavo Duarte article comparing the latency of various computer memory and storage systems:

"To put this into perspective, reading from L1 cache is like grabbing a piece of paper from your desk (3 seconds), L2 cache is picking up a book from a nearby shelf (14 seconds), and main system memory is taking a 4-minute walk down the hall to buy a Twix bar... and ... waiting for a hard drive seek is like leaving the building to roam the earth for one year and three months."
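Out of curiosity, I checked the scaling behind that analogy. With rough, illustrative latency figures (real numbers vary by hardware generation), multiplying everything by one common factor reproduces it fairly closely:

```python
# Illustrative latencies in nanoseconds; actual values vary by hardware.
latencies_ns = {
    "L1 cache": 1,
    "L2 cache": 5,
    "main memory": 80,
    "disk seek": 13_000_000,  # ~13 ms
}

scale = 3_000_000_000  # chosen so 1 ns of real latency becomes 3 human seconds

for name, ns in latencies_ns.items():
    human_seconds = ns * scale / 1_000_000_000
    print(f"{name}: {human_seconds:,.0f} s ({human_seconds / 86400:,.1f} days)")
```

The disk seek comes out to roughly 450 days at human scale, right in line with the "one year and three months" in the quote.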

For most people, an SSD won't replace your regular hard drive entirely. SSDs are still relatively low in capacity, so you'll still need a big hard drive for your data. But even a 120GB SSD (around $200 these days) is plenty for your system drive, and I guarantee it will be the best thing you can do for your personal productivity. Seriously, unleash the full potential of all those cores and gigabytes of RAM and get yourself an SSD - you won't believe the speed bump it will give your machine.