Archive for the ‘Technology’ Category

Size does matter

A good user experience requires responsiveness. Speed. Web pages that don’t make you wait more than a couple of seconds while they load, or, even worse, load in bits and pieces and reorganize themselves in front of the user; “that’s the way these things work” isn’t a good enough excuse. Your users don’t want to know how your site works (even if your site is about how the internet works – they want to read about the problems, not experience them); they just want to get things done and move on. As Eve says in Gaiman’s Fables and Reflections, “Some people have real problems with the stuff that goes on inside them … sometimes it can just kill the romance”.

Two of the culprits these days seem to be huge JavaScript and CSS files. They’re by no means the only causes of slowness, but they cause more than their share of trouble. Delays when loading a CSS file result in the dreaded flash of unstyled content on some browsers. Problems loading a JavaScript file… well, let’s just say it ain’t pretty. The delivery of these files can be slowed down by a number of factors. File size is one of them; a 168KB file will download considerably more slowly than a 6KB one.

This is made worse by the use of multiple JavaScript and/or CSS files. Separating functionality or styles in a sensible way is a godsend when it comes to maintenance, but the way the web works means that it’s a lot cheaper to download one largish file than several small ones. Multiple files mean that the browser must make multiple requests to the server, and each request carries a small overhead, since the server has to include a certain amount of information with each response it makes. To top it off, most browsers are configured to open a limited number of connections to a server at any one time – IE8 allows up to 6 concurrent downloads on broadband, while Firefox allows 8 – and these connections must be shared between the JavaScript, style, image, and other embedded files. On pages with a lot of stuff on them, the downloads end up queuing.

What we need, then, is a small number of reasonably sized files: how do we get to that?

Continue reading

Version control for the masses

Version control is one of those weird, geeky things that never really gained much ground in non-geek fields, despite being blindingly useful. Even educational institutions (at least the ones I’ve been able to observe) seem to omit so much as a mention of it from their technical courses. I can’t really give a reason for this, but it does at least give me the excuse to write a post about version control and kick it off with a rant.

So what’s this version control thing?

Version control (or source control) is nothing more arcane than keeping copies of your work as you make changes to it. On the surface, it’s all straightforward: make a copy of every file before you change it. That way, if you seriously mess up, you can always fall back to something that worked before, or at least compare your broken copy with one that used to work so you can figure out where it went off-kilter. Your client wants the image he told you to throw away two days ago? No problemo – out comes the backup. Accidentally deleted half your thesis and closed the word processor? No problem – out comes the backup.

Now, in the real world, it’s not so easy. Unless you have an iron will, a black belt in filing, and a zen-like ability to name files in a sensible way, you’ll be swamped with a huge number of backups with similar-looking names. Something that’s impossibly difficult to find might as well not exist at all. We want the goodies, but we really need to keep all those backups out of our way. Luckily, there are, like, loads of version control systems out there to do the heavy lifting for you.

The rest of this post will be about how to set up a version control system for a single user. We’ll use Subversion, because it’s free, works great, and because I like it. Since we want to keep things as slick as possible, we’re not going to use raw Subversion, though – we’re going to use TortoiseSVN, which is also free, also works great, and has nice coloured icons to boot. This neat tool lets you do most of the stuff you want Subversion for, but from Windows Explorer rather than the command line.

Continue reading

Is this thing on?

Download sample code here (New window)

There’s been a project knocking around the back of my head for a while. I keep putting it off, doing some work on it now and again but never really settling down to finish it. This long weekend turned out to be one of those occasions: I’ve just finished playing Fallout 3 (awesome game, though the ending left a slightly bitter taste in my mouth) and ended up with nothing to do. Visual Studio to the rescue…

Now what?

If you’ve ever worked on an application that requires Internet connectivity, you will have had to handle situations where the connection is unavailable for periods of time. Sometimes you can get around that by trapping an exception; I know I have, though I still find that particular solution somewhat inelegant. What I wanted was some way to monitor the state of the connection and keep track of it. That way, if you need to send a message, for example, your application can decide whether it should try to send it right away, or whether it should stash it away until the connection becomes available. This post is about how to determine whether a connection is up or down, and how to notify the application when the state changes.

Doing it the managed way

There is a method in .NET, NetworkInterface.GetIsNetworkAvailable(). This method tells you whether there is any connection going out of the machine, and it seemed to be just the ticket. Unfortunately (or fortunately – this would have been a really short post otherwise), in my case it wasn’t. You see, when we’re working off a LAN, we can have local-only, or limited, connectivity. This means that the machine can talk to the router, but cannot access the Internet at large; the method still returns true in this case. If you’re working on an intranet application, this may be fine for you, but I wanted something more.
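For reference, the managed check is a one-liner (just a sketch; as noted above, all it really tells you is that some network interface other than loopback is up):

using System.Net.NetworkInformation;

// True if any network interface (other than loopback/tunnel) is up.
// It says nothing about whether the wider Internet is actually reachable.
bool networkUp = NetworkInterface.GetIsNetworkAvailable();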

Going native

My friends have a long-standing joke. It goes something like, “If someone invents a piece of hardware to wipe your bottom, the Windows API probably already supports it.” (The exact quote is unsuitable for mixed company.) The Windows API does, indeed, grant you nearly god-like powers over your system, and nearly anything Windows can do, the API can do too. Since Windows can tell when there is no connectivity to the web (the dreaded “local only” icon), I figured that PInvoke.net would be the next port of call in my search. Sure enough, wininet.dll defines a function called InternetGetConnectedState.
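In C#, the declaration looks roughly like this (a minimal sketch; the flag values are the usual ones from wininet.h, and there are a few more than the ones listed here):

using System.Runtime.InteropServices;

[DllImport("wininet.dll", SetLastError = true)]
static extern bool InternetGetConnectedState(out int lpdwFlags, int dwReserved);

// A few of the flags it can set:
const int INTERNET_CONNECTION_MODEM   = 0x01;  // connected through a modem
const int INTERNET_CONNECTION_LAN     = 0x02;  // connected through a LAN
const int INTERNET_CONNECTION_PROXY   = 0x04;  // connected through a proxy
const int INTERNET_CONNECTION_OFFLINE = 0x20;  // the system is in offline mode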

Ouch. We still have the same issue… it doesn’t care about the “local only” business either. At least it gives us some additional information, such as whether the connection is through a modem or a LAN.

Hackety-hack-hack time

I’m still fairly confident that there’s a direct way to get to this information, but since I just want to get this to work right now, I’ll fudge it up a bit.

Using the wininet.dll function, we can tell whether we’re connected through a modem, connected through a LAN, or not connected at all. Not connected means just that, so that case is easy. For modem connections, I’ll assume that we’re either connected to the Internet or not connected at all (you’ll have to excuse my blatant ignorance of networking hardware – drop me a comment if you know better). That just leaves the LAN case to deal with.

The machine that goes PING!

When we know we’re connected to a LAN, but not how far we can go, we have to resort to the time-honoured mechanism of the PING. All hail the mighty PING.

A ping is simply a very short message sent to a server; if the server receives it, it just echoes it back. It’s a simple protocol that lets machines determine whether they can see each other over a network. In .NET, the ping is represented by the Ping class, whose Send method returns a PingReply:

// pingOfLife is a System.Net.NetworkInformation.Ping instance;
// PingTarget is the host we want to check against.
if (isLan)
{
    PingReply reply = pingOfLife.Send(PingTarget, 3000);
    return reply.Status == IPStatus.Success;
}

This sends a ping to the host identified by the PingTarget property, and waits up to 3 seconds (3000 milliseconds) for a reply. Ping takes a plain host name or IP address rather than a URL, so you’d use, say, www.google.com rather than http://www.google.com.

You will notice that in the sample code and the example above, we only assume a successful connection if the reply to the ping is successful. This is a simplification. In reality, the ping checks whether we can reach the ping target, irrespective of whether the rest of the Internet is accessible; if the target server happens to be down, you’ll get a “no connection” result even though the connection itself may be fine.

When you think about it, it doesn’t matter in most cases. You only care about whether your application can reach its server, not whether it’s got access to random web pages.
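Putting the pieces together, the overall check might look something like the sketch below. This is not the sample’s ConnectionState class, just an illustration of the approach; it assumes the InternetGetConnectedState declaration and flag constants shown earlier.

using System.Net.NetworkInformation;

// Rough sketch: classify the connection with wininet, then fall back to a
// ping when all we know is that we're on a LAN.
public bool IsConnected(string pingTarget)
{
    int flags;
    if (!InternetGetConnectedState(out flags, 0))
        return false;                       // no connection at all

    if ((flags & INTERNET_CONNECTION_MODEM) != 0)
        return true;                        // assumption: a dialled-up modem means we're online

    // LAN (or anything else): we might only have local connectivity, so ping to make sure.
    using (Ping pingOfLife = new Ping())
    {
        try
        {
            PingReply reply = pingOfLife.Send(pingTarget, 3000);
            return reply.Status == IPStatus.Success;
        }
        catch (PingException)
        {
            return false;                   // couldn't resolve or reach the target
        }
    }
}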

What else?


This seemed to sort it out for me. You can find the entire source code in the sample project. The ConnectionState class in the sample also contains events you can hook into so your application will get notified of state changes.
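If you’d rather roll your own than dig through the sample, the general shape of such a class is simple enough: poll the connection periodically and raise an event when the answer changes. The sketch below is hypothetical – the member names won’t match the sample’s ConnectionState class.

using System;
using System.Threading;

// Hypothetical monitor: polls a connectivity check on a timer and raises an
// event whenever the result flips. Not the sample's ConnectionState class.
public class ConnectionMonitor : IDisposable
{
    private readonly Timer timer;
    private bool lastState;

    // Raised with the new state (true = connected) whenever it changes.
    public event Action<bool> StateChanged;

    public ConnectionMonitor(Func<bool> checkConnection, TimeSpan interval)
    {
        lastState = checkConnection();
        timer = new Timer(delegate
        {
            bool current = checkConnection();
            if (current != lastState)
            {
                lastState = current;
                Action<bool> handler = StateChanged;
                if (handler != null)
                    handler(current);
            }
        }, null, interval, interval);
    }

    public void Dispose()
    {
        timer.Dispose();
    }
}

You’d then create one with something like new ConnectionMonitor(() => IsConnected("www.google.com"), TimeSpan.FromSeconds(30)) and hook a handler onto StateChanged.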

Download sample code here (New window)



Should HTML be considered as a data format?

In a short but thought-provoking post, Bertrand Le Roy asks whether HTML has evolved into a purely data-carrying format – which is, after all, what it was meant to be in the first place.

Unless I’ve missed the point entirely, this is, in fact, the general direction of the XHTML Strict specifications. With both appearance (via CSS) and behaviour (go jQuery!) decoupled from the data, the HTML file holds nothing but content, which is, in my opinion, Web Dev Nirvana. We cannot really have semantic HTML (as I understand it, this simply means that elements only ever describe what their content is, not what it does or looks like) until we have this separation.

In an aside, Mr. Le Roy says “and if we can ignore the huge majority of existing contents that is less than ideally written”. Thinking about it, that’s not as up-in-the-air idealistic as it sounds. It’s true that no browser or device intended for browsing the net can afford to ignore 99% of the content out there, but we already have a tool for telling the two apart: doctypes. Internet Explorer and Firefox already apply different rendering rules based on the doctype of a page; if we had an XHTML SERIOUSLYSTRICT doctype, a reader could offer only basic or limited handling for everything else (or none at all) and give the really strict documents the real deal.

To cap it off: I don’t think we’re there yet, but we’ll get there eventually, especially once the beefier CSS3 selectors get more widespread support. Once we have those, we can even do away with the class attribute and specify appearance in a purely declarative way, turning our HTML into pure content.

We’ve already (mostly) moved on from the dark age of blink and marquee tags, but to be honest, we don’t need to remember the bad old days of nested tables and abused markup – we’re still living in them. It is always good to see people like Mr. Le Roy carrying the torch forward.

FLIRt – A WordPress plugin

The inevitable disclaimer:

This WordPress plugin was thrown together in a fit of boredom by someone who is only vaguely aware of how the API is supposed to work, and whose last real contact with PHP development was, oh, so many years ago. If you want a proper FLIR plugin for WordPress, 23systems have the real thing in the WordPress Plugins directory. The one described here is for the few, the chosen, the band of brothers (and sisters) who want a daft excuse for a plugin to take apart.

Download FLIRt here

A couple of days ago, She Who Does Web was telling me about various ways of making a web page look a bit less crap, and the difficulty of making things stand out better when the only things at your disposal are about three fonts and a bit of chewing gum. sIFR was mentioned briefly, but being a grumpy old hack, I can’t make myself plonk a Flash movie into a page for the sole purpose of displaying some pretty fonts. I said so, and then I was told of FLIR. No, I am not making these acronyms up.

 

What’s a FLIR and what do I do with it?

FaceLift Image Replacement is a collection of JavaScript and PHP functions that let you mark text-based elements in your HTML. The JavaScript part pulls out the text and sends it to the PHP part on the server, which turns it into a spiffy image of the same text in a different font. This image is then returned to the browser and displayed instead of the original text. This lets you have nice headers without having to load up a paint application every time.

[Screenshot: the same page, before and after FLIR]

The only difference between the two screenshots above is that one was taken with the FLIR plugin disabled and the other with it enabled; no content, markup, or style was modified between the two shots. The font used in the second shot is, incidentally, Koczman Bálint’s “Capture It” from DaFont.

 

So why a WordPress plugin?

Why indeed, especially considering there’s already a very good one out there? Simply put, I wanted to fool around with the WordPress API for a bit – I’ve used WordPress as a blogging service for over a year now, but I’ve never actually tried to see how it works, which is kind of sloppy of me. Combining the two things seemed to make sense at the time.

 

Ok, so where do we start?

I’m working off WordPress 2.6.2, so if you’re interested, grab a copy from wordpress.org. My local installation lives on Apache. Not being naturally disposed to fudge around with configuration files, I downloaded and installed XAMPP for Windows, and it’s working pretty smoothly so far. I also downloaded the latest release of FLIR. This has some requirements of its own, but the default installation from XAMPP is enough for the basics. In my case, I dropped WordPress and FLIR into the htdocs subfolder of XAMPP, and named them “blog” and “facelift”, respectively.

 

Writing a plugin

Once everything is up and running, create a subfolder in the wp-content/plugins folder of your WordPress installation. This folder should have the same name as the plugin. While the WordPress documentation says the folder is optional (you can put the plugin directly in the plugins folder, like the hello plugin packaged with WordPress), I strongly suggest you make one – plugin files can grow pretty big, and you’ll find yourself wanting to break them up pretty soon. Once we have a folder, we can create the plugin file. This will be a PHP file with the same name as the plugin. To begin with, let’s add some metadata to it so that WordPress can see what the hell it is.

Plugin meta-data is written in the form of a structured comment:

 

<?php
/*
Plugin Name: FLIRt
Plugin URI: http://no.place.yet
Description: FLIR (FaceLift Image Replacement) helper for wordpress.
Version: 1.0
Author: Karl Agius
Author URI: http://karlagius.wordpress.com
*/
?>

 

The above gives WordPress enough information to display the following in its plugins page:

[Screenshot: the FLIRt entry on the WordPress plugins page]

As you can see, the engine parses these comments and builds a plugin entry on this page. The Plugin URI is used to create a link off the plugin name, while the Author URI is linked off the name of the author (bet you saw that one coming). If you can’t be bothered to enter all the fields, only the Plugin Name is actually required.

 

So far we have a plugin that does absolutely nothing. That’s all very Zen, but also very useless, so let’s make it do something. Create a PHP function in the plugin file – any old function will do, as long as it writes something to the output; even:

<?php
function Something() {
    echo "Hello world!";
}
?>

Once we’ve figured out what we want to do, we now have to tell the engine WHEN we want to do it. This is where actions come in. We need to register the function with an action – think event handlers, and you’re there. To register a function, we use the add_action function, like so:

<?php add_action('wp_footer', 'Something'); ?>

The first parameter is the action we want to hook into; in this case, the wp_footer action. The second parameter is the name of the function we want to hook up. Once this line is executed, WordPress knows that it has to call this function when the wp_footer action is raised. Exactly when that happens is up to the template, which will usually include a call to wp_footer() – it doesn’t matter to us at this stage, since all of this is abstracted away from us. All we need to know is that, when the action is triggered, our function runs. If we activate the plugin now and view the page, we should see the effects of our little plugin somewhere near the bottom of the page.

WordPress has a ton of actions you can hook to, and then some. The online documentation has a full list, so pick the one that makes most sense for your plugin, and hit it.
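To give a FLIR-flavoured example: the hook a plugin like this one cares most about is wp_head, which fires inside the page’s head and is the natural place to pull in the FLIR script. The snippet below is a hypothetical illustration, not the actual FLIRt code – the script path and file name are assumptions based on my “facelift” folder from earlier, and the initialisation call itself depends on FLIR’s own documentation.

<?php
// Hypothetical example: emit a script tag for FLIR in the page header.
// The path and file name are assumptions; check the FLIR docs for the real ones.
function flirt_add_header() {
    echo '<script type="text/javascript" src="/facelift/flir.js"></script>' . "\n";
}
add_action('wp_head', 'flirt_add_header');
?>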

 

Closing off

I’ll be writing more on this subject tomo… um, soon… uh, no, make that eventually, to discuss how to set up a database table when the plugin is first activated, and how to create an administration page. In the meantime, don’t be shy – rip the sample to shreds and see what makes it tick. Should you try to build it up on your own, let me give you some advice: $wpdb->show_errors(); is your friend. Learn to love it; it will save you hours of frustration. See ya all whenever.

Download FLIRt here
