Monday, February 1, 2016

Service Workers will Fundamentally Change the Web

Prior to 2005, the web was a pretty frustrating place compared to the world of native applications. There was no reliable way to have your web app communicate with your backend without having to resort to full-page refreshes, each of which was a complete server round-trip. Something as simple as filling out a form was an agonizing experience. Building something as advanced as Google Docs? Forget about it.

Along came AJAX

A technology called XMLHttpRequest, the foundation of what came to be known as AJAX, changed all that. Something that Microsoft had innocently built back in 1998 and added to IE5 in order to support Outlook Web Access was suddenly being supported in Safari and Firefox, and one morning the world woke up to the magic of Google Maps - a highly interactive mapping app that worked seamlessly in the browser, no page refreshes needed.

Consider how much the web has changed since then. AJAX is now a key part of pretty much every popular web app, and its usage is so ubiquitous that you don’t even think about it. It has so fundamentally altered what’s possible on the web that hardly anyone builds native desktop apps anymore.

The Next Revolution Begins

Ten years on, the web is on the verge of yet another one of these transformative moments, in the form of Progressive Web Apps built on top of a key technology called Service Worker. The addition of this technology will have as profound an effect on how web apps are built as AJAX did 10 years ago.

Service worker’s key breakthrough is conceptually simple: allow JavaScript code to be "installed" directly into the browser itself, independent of any single web page. That code can then be executed even when the originating web site is not active in any open browser tab, in response to events such as network requests, push message events, and more in the future.
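Registering a worker from page code is conceptually a one-liner. Here's a minimal sketch; the `/sw.js` file name and the wrapper function are my own illustration, not part of any spec:

```javascript
// Minimal sketch of registering a service worker from page code.
// The navigator object is passed in so the logic can be exercised
// outside a browser; in a real page you would pass window.navigator.
function registerWorker(nav) {
  if (nav && 'serviceWorker' in nav) {
    // Resolves with a ServiceWorkerRegistration once the worker is installed
    return nav.serviceWorker.register('/sw.js');
  }
  // Browsers without support simply fall back to the normal page experience
  return Promise.resolve(null);
}
```

Note that the feature check means unsupported browsers lose nothing; they just never install the worker.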

This one fundamental change enables entirely new scenarios:

  • Reliability: because the service worker can intercept network requests from the browser, it becomes easy to build web apps that keep working when there’s no network connection or (more commonly) when the connection is spotty
  • Native behavior: browsers can run service worker code even when the associated site isn’t loaded, enabling web apps to implement features like push notifications the same way native apps do
  • Powerful caching: the service worker API includes a Cache API that can cache network responses as well as the app’s HTML. This lets the app store its UI “shell” locally for an “instant-boot” experience, just like native apps
  • Background operations: service workers can register to receive events when the network becomes available, enabling any offline work the user has done to be synchronized back to the server
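To make the reliability and caching bullets concrete, here is a "network first, falling back to cache" strategy written as a plain function with its dependencies passed in. The function name and shape are my own sketch; in a real service worker you would call something like this from a 'fetch' event handler, with the global fetch() and a Cache opened via caches.open():

```javascript
// 'Network first, falling back to cache' as a plain async function.
// fetchFn and cache stand in for the global fetch() and a Cache object
// inside a real service worker, so the logic can be followed on its own.
async function networkFirst(request, fetchFn, cache) {
  try {
    const response = await fetchFn(request);
    await cache.put(request, response); // keep a copy for offline use
    return response;
  } catch (err) {
    const cached = await cache.match(request);
    if (cached) return cached;   // offline, but we have a cached copy
    throw err;                   // offline and never cached
  }
}
```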

When you combine service workers with other technologies like the Web App Manifest, your web apps can do things like have a rich homescreen presence on a device, control the app’s layout orientation, and specify a “splash screen” startup experience, giving your app the look and feel of a native one.
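A Web App Manifest is just a small JSON file linked from your pages. A hypothetical minimal one might look like this (every value here is illustrative):

```json
{
  "name": "Example News App",
  "short_name": "News",
  "start_url": "/index.html",
  "display": "standalone",
  "orientation": "portrait",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```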

It's Already Happening

Some savvy developers have already started to take notice: Facebook is using service workers to enable push notifications on its web app, and Flipkart (India’s largest shopping site) has rebuilt its mobile web experience as a progressive web app.

These technologies are already supported today in Chrome (both desktop and Android), Firefox, and Opera. Hopefully, Microsoft and Apple will soon follow.

That doesn’t mean, however, that you can’t start using these technologies right away. Service worker follows the principles of progressive enhancement: you build your web app so that users get the enhanced experience when service worker is present, and the default experience when it isn’t. This is the same path that AJAX took when it was first introduced, and there’s no reason why service worker should be any different.

The next ten years of web app development is going to be even more exciting than the last ten. To learn more about service workers, check out:

Happy Coding!

Monday, September 8, 2014

Can you trust your API provider?

In just a few short years, the number of third-party APIs on the web has exploded. Everywhere you look, it seems like just about every important service and app these days has an API available for developers to tinker with.

But can you trust them? How do you know that an API provider is serious about supporting its developers and won't change things on a moment's notice?

I've spent the vast majority of my career - almost 25 years now - building and promoting developer platforms. I even had my own software company for a while where I was the consumer for an API platform that I myself had helped build, so I've been on both sides.

In this post, I've put together a series of questions that you can ask yourself before you fully commit to an API to try and gauge how much you can trust it - and the provider. Now, I'm going to invoke professional courtesy here and not call out any third parties by name since I work for Google, but I will use some instances from my personal experience to show what I feel are good examples of API stewardship.

1. Does the API directly support the provider's revenue stream or strategic product goals?

This criterion is important, but not always easy to assess. Let's take a few Google examples for illustration (again, full disclosure here - I work for Google). The Drive API directly supports Google's main goal for Drive - that is, to increase the adoption of Drive. To that end, the Drive API gives pretty deep and wide access to the service, and makes it easy for developers to integrate directly into the Drive UI itself. Google even calls out apps that use Drive in the Chrome Web Store.

Another good example is Apps Script, the API for Google Docs. Here again, the main motivation for Apps Script is to enable developers to create features that Docs itself doesn't yet provide or address scenarios that are too niche for Google to spend much time on, thereby increasing the appeal and adoption of Google Docs.

In both of these cases, the API exists to further a strategic goal - the adoption of the product in question. As such, developers that adopt the API help further the strategic goal, ensuring that the interests of both the developer and the provider are aligned.

An API that doesn't directly support a provider's revenue or strategic goals can be more easily curtailed or changed by the provider if it begins to go in a direction that they don't like.

2. Does the API give you access to "the good stuff"?

This is also a pretty good indicator of how much you can trust the API. If an API only provides superficial access to data or capabilities that really aren't that valuable, then you have to wonder about the intentions of the provider or what they might do in the future.

APIs that give rich, deep access to the important features of a service or product indicate that the API provider cares enough to think through how the API will be used and to devote engineering resources to the development and maintenance of the API.

An API that only gives you the ability to develop ancillary, convenience features - what I call a "vanity API" - is a pretty good signal that the provider doesn't have a big commitment to the API or to its developers. It's a lot easier to close off an API that doesn't have deep hooks into the underlying service than it is to change one that's going to affect a lot of customers down the line.

Even if the provider doesn't close off the API, the risk is that you're just serving as a free R&D resource for them. If you make a convenience feature that's popular, there's nothing stopping them from just copying it and rolling it into their core service. Caveat developer.

3. Is the API being actively developed, with a vibrant community?

An API that is receiving regular updates and has a large, active community on Stack Overflow or other forums is generally going to be more trustworthy than one that doesn't.

Regular updates that include new features, samples, and bug fixes demonstrate responsiveness and attention from the provider, and an active community shows that there's a well-worn path where you can get support from sources other than the API provider.

4. Are you able to make money using the API, without the provider interfering?

This one is huge. If the API provider puts unnecessary or onerous restrictions on how you're able to charge for your product, then that's a big red flag on how much you can trust the API.

Back in my QuarkXPress days, the product sold for $895 at retail (in early-1990s dollars, no less). I kid you not, we actually had a plug-in developer who charged $75,000 for their product (now, obviously, this was not a consumer-facing thing. In this case the product was for large Atex pre-press systems). Did we care? Hell no. We thought it was awesome that someone was able to sustain a business like that on top of our platform. Of course, most of our developers charged a lot less, but the fact that someone was charging 100x our product's price was not a problem for us.

Now, you might have to upgrade to a paid API subscription in order to charge for your own product, but that's actually a good thing - a provider that charges for enhanced API service is entering into a contract with you, which in turn makes it harder for them to change or discontinue that API.

5. Does the API Provider itself use its own API for shipping features?

This criterion is a bit asymmetric - if a provider doesn't use its own API for its features, that's not necessarily a bad thing. But if it does, that's a powerful indicator that it has a big incentive to invest in the API.

When I worked on Dreamweaver and then later on Microsoft Visual Studio, one of the things we were most proud of was the fact that we used our own API to deliver new features in the core product. It sent a strong message to our ecosystems that we cared enough about our API that we used it ourselves and considered it solid enough to deliver core features to our users.

6. Does the API Provider actively promote its developers?

Again, this one isn't necessarily a very strong indicator, but it helps to know that if you build a product based on an API, the provider will be there to help you promote it.

On the other hand, if a provider is squeamish about promoting its developers, that can be a bad sign. Ideally, you want a company that cares enough about the success of its API that it isn't afraid to call out developers who build great products on top of its platform.


Third-party APIs can be a great way to leverage the features of services that you don't want to have to build yourself, allowing you to focus on the value proposition of your product. Relying on them, however, introduces a certain level of risk. Hopefully, by asking yourself the questions that I've posed here in the post, you can minimize that risk and build a successful product.

In short, look for:

  1. APIs where your interests and the provider's are aligned
  2. APIs that give you access to the valuable, interesting stuff
  3. Active development on the API with a great, supportive community
  4. The ability to make money without draconian interference from the provider
  5. A provider that uses their own API for new features
  6. A provider that isn't afraid to promote great developers and apps

Now, to be clear, the answers to these questions aren't foolproof, iron-clad guarantees that you can trust an API to be there over the long haul or that the provider won't change things (or pull the rug out from under you). But if you think about these issues up front, they can help you avoid relying on API services that one day suddenly change their mind about your value to them as a developer.

Monday, June 9, 2014

Building a Customized News Service with Google Apps Script

A big part of my job involves staying on top of current technology events and news. There are several technology-related news sites that I read daily, and typically I scan them first thing in the morning for anything I consider to be important before reading other items of interest. It’s a bit tedious to open each site and read all the individual stories for the ones I care about, and even the blog readers I use aren't well-suited for a quick morning summary of important items.

I've been playing around with Google Apps Script for a while now, and I’m pretty impressed with the versatility and power available via the APIs coupled with the simplicity of creating and running scripts. So, I decided to see if I could write a script that would scan the news sites I read each morning and deliver a customized news feed to my Gmail inbox, along with a summary of my day’s calendar. I was originally going to try the same thing with App Engine and cron tasks, but it turns out Apps Script is more than enough to handle this job. I’m pretty happy with the results, and in this post I’ll discuss how I created the script.


The script itself is pretty simple: it scans a set of news feeds, creates links to the stories, scans the headlines for specific words, and calls them out separately from the rest of the news. It also reads my calendar for the day’s events and tells me if I have any upcoming events that I have not yet RSVP’d to.

Getting started with Google Apps Script development is pretty easy; just make sure that you have a Google account, are signed in, and visit the Apps Script editor at script.google.com.

To get this script to work, there are a couple of things you'll need to do:
  1. Enable the Calendar API. From within the script editor, select Resources -> Advanced Google Services, then enable Calendar API.
  2. You also need to enable the Calendar API from within the Google Developers Console. There's a link to do this in the dialog in step 1.

What It Looks Like

You create scripts using the Apps Script IDE, which is of course hosted in the cloud. It's not exactly Eclipse or Visual Studio, but it's very functional and gets the job done:


When the script is run, it delivers an email to the address of whoever runs the script. The screen shot below shows what the resulting email looks like:


At the top, I have my daily events including ones that need my response, followed by my curated top stories, then the rest of the news items.

The script relies on some global settings to work, which I've listed here:
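Something along these lines, where every value is a placeholder for your own feeds, keywords, and address:

```javascript
// Illustrative global settings; the real values are whatever feeds,
// keywords, and recipient you care about.
var FEEDS = [
  { section: 'Tech News', url: 'http://example.com/technews/rss' },
  { section: 'Web Dev',  url: 'http://example.com/webdev/rss' }
];
var KEYWORDS = ['chrome', 'javascript', 'api'];  // headlines to elevate
var RECIPIENT = 'me@example.com';                // where the digest goes
```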

The main function of this script is called deliverNews(). It looks like this:
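In outline, it's something like the following sketch. The helper names are stand-ins of my own, passed in as parameters so the outline stays self-contained:

```javascript
// Hypothetical outline of deliverNews(), matching the steps listed below.
// The helpers object carries stand-ins for the real worker functions.
function deliverNews(helpers) {
  var calEventsStr = helpers.buildCalendarSummary();   // today's events + RSVPs
  var items = helpers.readAllFeeds();                  // fetch and parse feeds
  var topStoriesStr = helpers.buildTopStories(items);  // keyword-matched items
  var feedsStr = helpers.buildFeedSections(items);     // everything else
  helpers.sendDigest(calEventsStr + topStoriesStr + feedsStr);
}
```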

The deliverNews function does several things:
  1. Reads the Calendar for the day's upcoming events
  2. Checks the week ahead to see if there are any events I have not RSVP'd to
  3. Reads the news feeds and builds the Top Stories list
  4. Puts the rest of the news stories into their own sections

Reading the Calendar

Let’s start by looking at the code to generate the calendar summary. The Google Apps Script API has an advanced service called Calendar. This service is a thin wrapper around the actual Calendar REST API itself.

To get the events for the upcoming day, I use the Calendar service to retrieve the events using time bounds and a setting to expand recurring events into single instances:
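A hedged sketch of that query, assuming the advanced Calendar service is enabled. The service object is a parameter here so the call shape can be followed (and stubbed) outside Apps Script:

```javascript
// Query today's events via the advanced Calendar service.
function getTodaysEvents(calendarService, now) {
  var start = new Date(now.getTime());
  start.setHours(0, 0, 0, 0);
  var end = new Date(now.getTime());
  end.setHours(23, 59, 59, 999);
  var response = calendarService.Events.list('primary', {
    timeMin: start.toISOString(),
    timeMax: end.toISOString(),
    singleEvents: true,    // expand recurring events into single instances
    orderBy: 'startTime'
  });
  return response.items || [];
}
```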

The getEventsForToday() function returns a list of Event resources. I then just need to scan each event for the title, URL, and date and build the resulting list:
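A sketch of that scan, using field names from the Calendar API's Event resource (summary, htmlLink, start.dateTime); the markup itself is just for illustration:

```javascript
// Turn Event resources into HTML list items.
function buildEventListHtml(events) {
  return events.map(function (ev) {
    var when = ev.start.dateTime || ev.start.date; // all-day events use 'date'
    return '<li><a href="' + ev.htmlLink + '">' + ev.summary + '</a> (' + when + ')</li>';
  }).join('\n');
}
```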

You’ll probably notice that I wrote my own convertDate function to generate a Date. Why? Because Apps Script returns dates as ISO Date strings. In ECMAScript 5, you can pass ISO Date strings directly to the Date() constructor to create a date, but as of this writing the version that Apps Script appears to be using doesn’t support this, so I made my own:
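A minimal version of that conversion might look like this; it assumes the usual ISO 8601 shape and, for brevity, treats the string as local time rather than applying any timezone offset:

```javascript
// Parse an ISO 8601 date string by hand, since this runtime's Date()
// constructor doesn't accept ISO strings. Timezone offsets are ignored.
function convertDate(isoStr) {
  var parts = isoStr.split(/[-T:.Z+]/);
  return new Date(
    Number(parts[0]),        // year
    Number(parts[1]) - 1,    // month (zero-based)
    Number(parts[2]),        // day
    Number(parts[3] || 0),   // hours
    Number(parts[4] || 0),   // minutes
    Number(parts[5] || 0)    // seconds
  );
}
```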

Next, I need to find events that I have not RSVP’d to. I use the Calendar service to retrieve events for the current day, then scan each one to find the attendee record that corresponds to my email address. If found, I check the responseStatus field, which will be "needsAction" if I haven't responded to it:
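The attendee check itself boils down to something like this; the event shape follows the Calendar API's Event resource, and the function name is my own:

```javascript
// Does this event still need my RSVP? 'needsAction' means no response yet.
function needsRsvp(event, myEmail) {
  var attendees = event.attendees || [];
  for (var i = 0; i < attendees.length; i++) {
    if (attendees[i].email === myEmail) {
      return attendees[i].responseStatus === 'needsAction';
    }
  }
  return false; // I'm not an attendee, or the event has no attendee list
}
```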

Getting the News

There are two parts to this portion of the script - one is getting the news headlines and generating the categorized results, the other is extracting the headlines that are important and elevating them to the Top Stories section. To do this, I use the UrlFetchApp service to retrieve the data content, then the XmlService to parse each feed.

The code uses UrlFetchApp to read each feed and XmlService to parse the result into a document tree that I can use to extract each item’s title and link. Right now, the script only parses RSS 2.0 feeds, but since practically everyone supports that format, it’s what I decided to code to. Adding support for ATOM would be easy enough as well.
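Here's a hedged sketch of that fetch-and-parse step, with UrlFetchApp and XmlService passed in as parameters so the flow can be followed (and stubbed) outside Apps Script. Only the RSS 2.0 channel/item shape is handled, as noted above:

```javascript
// Fetch a feed and extract {title, link} pairs from its RSS 2.0 items.
function readFeed(url, urlFetchApp, xmlService) {
  var xml = urlFetchApp.fetch(url).getContentText();
  var root = xmlService.parse(xml).getRootElement();
  var items = root.getChild('channel').getChildren('item');
  return items.map(function (item) {
    return {
      title: item.getChild('title').getText(),
      link: item.getChild('link').getText()
    };
  });
}
```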

Now, by itself, this would result in a pretty large set of headlines, and I don’t want to have to scan each one to see if it is important. Instead, I have the script look at each headline for a set of keywords that I’ve defined. If a headline contains a keyword, then that headline is put into a special array. If the headline doesn’t contain a keyword, it gets added to the feed’s category normally.
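The triage step amounts to something like this (the function name and keyword handling are illustrative):

```javascript
// Split headlines into keyword matches (top stories) and everything else.
function triageHeadlines(headlines, keywords) {
  var topStories = [], rest = [];
  headlines.forEach(function (h) {
    var lower = h.toLowerCase();
    var isTop = keywords.some(function (k) { return lower.indexOf(k) !== -1; });
    (isTop ? topStories : rest).push(h);
  });
  return { topStories: topStories, rest: rest };
}
```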

Sending the Email

Now all that’s left to do is send the email. The GmailApp service is used for this, and it provides the ability to send an email with HTML content. Let's revisit the last part of the deliverNews function:

At this point in the script, the string variables calEventsStr, topStoriesStr, and feedsStr contain the HTML code for each of the sections. All that’s left to do is put them together into one result and send that content via the GmailApp.
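The send itself might look like this sketch. GmailApp.sendEmail accepts an options object whose htmlBody field carries the HTML content; the service is a parameter here so the call can be stubbed outside Apps Script, and the subject line is made up:

```javascript
// Concatenate the three sections and send them as one HTML email.
function sendDigest(gmailApp, recipient, calEventsStr, topStoriesStr, feedsStr) {
  var html = calEventsStr + topStoriesStr + feedsStr;
  gmailApp.sendEmail(recipient, 'Your morning news digest',
    'This digest requires an HTML-capable mail client.', // plain-text fallback
    { htmlBody: html });
  return html;
}
```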

Automating the Script

Now I have a script that can be manually run whenever I want to generate the email, but that’s obviously not ideal. What I really want is a script that runs for me on a pre-set schedule. Google Apps Script makes this possible by enabling what are called “Triggers”.

I want my email delivered to me every morning before I wake up. To set that up, I select “Project Triggers” from the Resources menu, which brings up a dialog that lets me set a timed trigger:


In this case, I have my Trigger set to run daily between 6 and 7 in the morning. That’s really all there is to it - you just choose the function you want executed and when you want it run. Apps Script does the rest!
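If you prefer code to dialogs, the same daily trigger can be created with ScriptApp's trigger builder. Here's a sketch with the service passed in as a parameter so the chain stands on its own:

```javascript
// Create a daily trigger for deliverNews() in the 6-7 AM window.
function createMorningTrigger(scriptApp) {
  return scriptApp.newTrigger('deliverNews')
    .timeBased()
    .everyDays(1)
    .atHour(6)
    .create();
}
```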

The full script code is available here if you want to download it and modify it for your own needs.

Monday, March 10, 2014

Are Chromebooks the Future of Computing?

I was recently pointed to this article on Tech Republic, which asks if Chromebooks are the future of computing. I’ll come back to that question in a bit, because I want to address the larger, more encompassing one first - are lightweight, web-centric operating systems the future of computing?

Haven’t we been here before?

Let me preface the rest of this post by saying that I’ve been in this industry for some time now - almost 25 years. I can remember when computers were machines that you had to get into via a terminal, such as a VT100, often over a dial-up line. Then, along came the desktop computer, popularized by the Apple II and IBM-PC, and the tech press of the time trumpeted that individual personal PCs were the "computers of the future". And for a while, the pendulum swung away from mainframes, and personal PCs ruled.

And boy, did they rule. Just about every electronics company had their own PC: Texas Instruments (Bill Cosby was even their spokesman), Atari, Panasonic, Commodore, Acorn… the list is too long to enumerate here. Eventually, though, there was a shakeout, leaving just Apple and the PC clone industry.

Then the web came along, and the pendulum started to swing back toward server-centric computing, with the client (in this case, a browser) playing a lesser (though still significant) role in the user's computing experience. The idea of having apps and data accessible from almost anywhere once again became vogue, and we even had a massive investment bubble to go along with it.

Here we are once again though, with personal devices delivering a very rich, local computing experience while connected via a cloud-backed service. Today's devices, whether they are iPads, Android tablets, or laptop PCs, work great when offline but truly shine when connected.

That, I think, is the far more important lesson to be learned here rather than whether one platform or another is the "future of computing." The key thing you need to focus on is the trend, not the current state of where things are.

What Do "Most People" Really Need?

Stop and think for a moment about what tablets (whether iOS or Android) do and how they work. They don't have a generally accessible file system in the sense of normal PCs. In fact, they lack several of the characteristics of what you would call a traditional fully-featured computing device: most of them aren't expandable, they don't have as much RAM as traditional laptops, there's no mouse, and most don't have a physical keyboard. And yet these things are flying off the store shelves. Why? Because they perform almost all the functions that most people need on a daily basis, and they are great second and third devices to complement a home's primary computer. They send and read email, play games, take photos, play videos and music, compose documents, etc.

Now, wait a second - don't most of those characteristics sound like Chromebooks? The only real difference was that Chromebook apps were Web apps, but that all changed last September when Google added the ability to install local, offline-capable Chrome Apps on Chromebooks.

When Chrome Apps happened, the last real remaining barrier to Chromebook adoption fell away. The Web is always introducing new features, and since Chrome updates every 6 weeks, a Chromebook owner always has the latest runtime and all the new features to go with it. Developers don't need to wait very long for end users to have the capabilities to run the latest apps, and since Chrome also runs on Windows, Mac, and Linux, the addressable market is huge.

Back to the question

So, are Chromebooks the "future of computing"? I don't know, and I don't think anyone else knows for sure, but what I do know is that consumers have clearly voted in favor of computing that is simple, secure, always-up-to-date, and works wherever they are. Tablets clearly fit that bill, as do modern smartphones, and increasingly, so do Chromebooks.

"Ah", you say, "but didn’t we go through the era of the netbook once before?" My answer to this is simple - Chromebooks aren’t netbooks. Again, let's go back to what made the iPod, then the iPhone, then the iPad super-successful: the ecosystem. People don’t buy "devices" anymore. They buy ecosystems. Consumers want to know that if they buy a particular platform they won’t need to worry about whether it will be supported or whether they can get what they need for it - both in terms of software and accessories.

The main problem that netbooks had was that there was no ecosystem - there was no central software store, no predictable update schedule, no place to bring your device when it was on the fritz, and your friends had different machines so it was hard to share their content.

Chromebooks don't have any of those problems. There's a full, rich ecosystem in the form of the Chrome Web Store, the Web itself, and even Google Play (Chromebooks can play all your Play content like movies and music). There's a set of big, established players supporting them. Content is now vastly standardized, and all your favorite Web apps work just fine (Chromebooks even support Flash, without the security hassles).

Now, sure, they aren't for everybody - but neither were the earliest personal PCs. You had to be a pretty savvy user back in the days of the IBM-PC and even the Apple II. Chromebooks don’t yet fit the bill for people who need to do heavy development or who need to run complex systems, but all of that is just a matter of time. Remember - a lot of the criticism you hear about Chromebooks was similar to what was leveled against Gmail ("no enterprise would ever use that!") and Android ("that ecosystem is way too fragmented!") and Google Docs ("it’s not Office!").

The future of computing is one in which the user is the center, not the device. Operating systems that get out of the way, not ones that insert themselves between you and your work. Simplicity, not complexity. Security, not anti-virus programs. So far, Chromebooks fit all of those. But even if Chromebooks themselves turn out not to be the "future", the directional trend itself is pretty clear.

Don’t be the frog in the pot

Before closing, I want to remind everyone about the parable of the boiling frog. The story is simple, for those of you that haven’t heard it - essentially, if you put a frog in a pot of boiling water, it jumps out right away. But heat the water gradually enough, and the frog will slowly boil to death, unaware of the drastic changes in its surroundings because of the slow nature of the change. The story is somewhat apocryphal, but it’s an apt metaphor for people who aren’t willing or able to notice the changes around them and adapt.

With that in mind, consider the following timeline:

  • 4 years ago, Chromebooks didn’t exist.
  • 3 years ago, Chromebooks were just a curiosity. Back then they didn’t even have a proper desktop or windows - only browser tabs. Many people wondered what they would ever be useful for.
  • 2 years ago, Samsung’s Chromebook suddenly started dominating Amazon’s top-seller list, and Google announced that over 2,000 school districts had adopted Chromebooks. People said "well, for some consumers they make sense, and schools clearly benefit from these devices, but they’re not for mainstream users."
  • Within the last year, Chromebooks were in over 5,000 schools, the Pixel was demonstrating what a high-end Chromebook could be, there were 3 new OEMs making Chromebooks, and you could buy them in a variety of mainstream stores. Oh, and Microsoft finally took notice and decided to start advertising against them.
  • Today, 6 of the top 15 best-selling laptops on Amazon - almost half - are Chromebooks. Every single OEM that makes a PC has now signed on to produce Chromebooks. Even HP, whose CEO Meg Whitman has said "Chromebooks have surprised us", is now making two models (so far!). There are new Chromeboxes on the way, and even enterprises are starting to replace their existing PCs with ChromeOS devices. The Chromecast, which runs a simpler version of ChromeOS, has taken off.

Focus on the trend, not the current state of the world. The water is starting to boil, folks.

Monday, November 4, 2013

Tools for Developing on ChromeOS

I've been using my Chromebook Pixel as my primary machine for most of my work ever since I got it, with one crucial exception - development. Whenever I needed to do more than just edit a couple of individual files, I had to turn to my MacBook Pro with its trusty Sublime text editor and its ability to run things like Node.

I'm happy to report, however, that things have changed dramatically just in the last several months, and it's now easier than ever to do real development on your Chromebook. In this post, I'm going to give an overview of some of the tools that I use when developing on my ChromeOS devices (In addition to my Pixel, I also have an HP 14, a Samsung 550 in developer mode, a Samsung Series 3, and an Acer C720).

Keep in mind that some of these tools are fairly new and still developing, so keep an eye on them as they get better over time. Also note that all the tools I talk about in this post are available for all the supported Chrome Apps platforms (Mac, Windows, Linux, and ChromeOS), but I'm focusing just on ChromeOS here.

The Basics - Offline Editing

Let's start with the basics - editing code. Obviously, having a great code editor that works locally on offline files is a prerequisite for any serious development on any computer. For ChromeOS, we have a couple of good options here.

Text App

Until recently, my go-to text editor was - not surprisingly - Text. It's pretty straightforward, works offline, has support for syntax highlighting, syncs with Google Drive, and can easily handle multiple open files.

Screen shot of the Text editor app from the Chrome Web Store
The Text app from the Chrome Web Store
As of this writing, though, Text has a few shortcomings. It doesn't remember the files I had open between sessions, it doesn't remember the window position or size, and it's not very customizable. All that said, though, it's a good editor for what it does, and I still use it all the time for various text editing tasks.


Caret

I'm a pretty big fan of Sublime Text, and it's one of the things I missed most when working on my Chromebook. I recently came across a new Chrome App called Caret that does a great job of code editing and gives me a user experience that's much more like Sublime:

screen shot of the Caret text editor application
The very Sublime-like "Caret" editor

Some of the things I really like about Caret are:
  • It remembers the files I had open between run sessions
  • It remembers the size and position of the window
  • Support for multiple carets and other Sublime-like features
  • Lots of color schemes, support for font sizes, etc.
  • Code completion
  • Very customizable - lots of user-editable settings
  • It's open source and has a Github project

Between these two tools, I think local code editing is pretty well covered. Caret in particular is in active development by the developer, so I'm looking forward to that app getting better over time and incorporating new features as they become available in the Chrome Apps API.

md everywhere

In addition to editing code, I find myself creating and editing Markdown content quite a bit, since it's how README files and such are represented on GitHub.

The app that I usually reach for when I need to do this on my Chromebook is md everywhere, which - you guessed it - runs as an offline Chrome App. It has two main components: a window for editing markdown content, and a separate viewing window to see the HTML output, which can be shown or hidden separately from the editing window.

Editing Markdown on the left, with preview on the right

It's not perfect, and has a few quirks, but it gets the job done and greatly reduces the amount of guesswork involved. There are a couple of other MD editors in the store, but md everywhere seems to be the best of the bunch so far.

Cloud-Based IDEs

One of the more exciting trends of recent years has been the emergence of cloud-based IDEs. These are full-fledged development environments that work within your browser, giving you full access to a remote development box that can run all manner of environments, tools, etc.

Several of these IDE providers have existing apps in the Chrome Web Store, such as (but certainly not limited to): Koding, Cloud9, Nitrous.IO, ShiftEdit, and Codenvy. There are also collaborative cloud-based editors like Neutron Drive.

Each of these tools could take up an entire post on their own, and I'm planning to do a deeper dive into them at some point, but for the moment I'm going to focus on Nitrous.

Nitrous.IO

The Nitrous guys recently released their online IDE as a packaged Chrome App, so I've been using it a lot recently (which is not to say that any of the others are somehow lesser; to the contrary, they're all very good and have their own strengths). 

Cloud coding with Nitrous.IO

The Nitrous app is essentially a window that hosts the <webview> control, which displays the remote environment that you're connected to.

Setting up a remote Nitrous box is pretty easy, and you have your choice of Node, Ruby, Python, and Go environments. The Web IDE is straightforward, includes collaboration options, and lets you upload source files to the remote box. There's also a command shell for executing commands and such. The docs are pretty extensive, too.

I recently tried out Nitrous on a small NodeJS/MongoDB project, and the experience was about as easy as it could be. I just provisioned the box, uploaded some source files, and was working away within minutes. The Web IDE even provides a "Preview" function that launches the app within the local browser with the URL already set up.

Between working on source files locally and having a cloud IDE to test and deploy in, I can easily see how a setup like this could become the future of development.

Testing and Debugging

One of the greatest things about developing on a Chromebook is that you have access to the best Web developer tools in the business - the Chrome DevTools, naturally. You would be amazed at the number of people who don't yet realize this.

Chrome DevTools

Whether you are working on Chrome Apps, Extensions, or regular Web sites, the Chrome DevTools give you unparalleled features for debugging, profiling, and logging. I'm not going to go deep into the DevTools here, since everything you need to use them effectively is covered at the link I've provided above, but there are plenty of lesser-known features that are definitely worth investigating.

Postman

If you do any work that requires transferring data to and from RESTful Web services (and who doesn't these days?), then you'll find that Postman is a great utility for testing REST based APIs.

Postman comes in two flavors - the original V1 Packaged App, and the newer Chrome App. I prefer the latter, since it works offline and in its own window. It also comes with a great set of online docs.

Postman makes working with REST APIs easy

Postman has some features that make working with REST services a breeze:

  • You can define collections of requests, and group them together to reflect your API
  • Adding headers and query parameters is quick and easy
  • You can set up different environments (such as "Production" vs. "Development") and then customize the way that requests are sent using variables
  • There are helpers for setting up various authentication schemes, such as OAuth2
  • The app adds each request to its History, which makes it easy to experiment
  • It is easy to customize the app's settings, and there are a lot of ways to tailor Postman to your particular needs
If your work involves REST APIs, then I highly recommend you download Postman.
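Postman's environments feature (the "Production" vs. "Development" example above) boils down to substituting named variables into request templates before the request is sent. As a rough sketch of the idea - this is illustrative code, not Postman's actual implementation - variable resolution might look like:

```javascript
// Sketch of Postman-style {{variable}} substitution (illustrative only).
// Unknown variables are left in place, as a visual cue that something is unset.
function resolveTemplate(template, env) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, name) {
    return Object.prototype.hasOwnProperty.call(env, name) ? env[name] : match;
  });
}

// Two environments sharing one request template:
var production = { host: "api.example.com", version: "v2" };
var development = { host: "localhost:3000", version: "v2" };

console.log(resolveTemplate("https://{{host}}/{{version}}/users", production));
// → https://api.example.com/v2/users
console.log(resolveTemplate("https://{{host}}/{{version}}/users", development));
// → https://localhost:3000/v2/users
```

Switching environments in Postman just swaps which variable set gets applied - the collection of requests itself never changes.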

Dimensions

Dimensions is a legacy, or "V1", Chrome Packaged App, but it's still useful and runs offline so I included it here (and hopefully the developer is working on updating it to be a Chrome App). Dimensions essentially does one thing - it shows you how your UI will lay out at different screen sizes.

Using Dimensions to view various layout sizes

Essentially, you start up the app, point it at a URL, and then slide the grab-handle left and right to simulate different screen widths. Assuming you've written your HTML to use responsive layout techniques, the page will change at various widths so you can gauge what it will look like on various devices.

Dimensions includes pre-defined layouts for iPhone, Android, and tablet sizes, and you can rotate each device to simulate different orientations.

By default, the app will load in a browser tab, even if you're offline (that's what v1 Packaged Apps do). I usually set Dimensions to open as a regular window (just right-click on the app icon and choose the appropriate setting), which gives it a more app-like feel.

Useful Utilities

Editing and debugging get all the attention, but there are always those small utilities that make the rest of a developer's workflow that much easier. Two of the utilities that I've found really useful so far are GistBox and HexReader.

GistBox

I've started using Github more and more to store things like code snippets (or "Gists", in their terms) because it's really easy to access them from everywhere, and GistBox makes organizing and working with Gists simple.

You can edit your Gists directly in the app, create new ones, apply labels, search Gists, and even share them via email or Twitter.

Organizing Gists with GistBox

Unfortunately, since it relies on Github access, it doesn't currently work offline, but given its purpose and usage scenario, I haven't found a situation where that has mattered for me yet.

HexReader

This one is pretty straightforward, and will be familiar to anyone who's ever had to manipulate binary files. Want to make sure that the file you're working with really is a proper PNG? Need to extract some embedded data from a PDF file? HexReader has you covered.

HexReader in action

It supports large files, you can copy data as text or hex, and it has bookmarks too.

Yes, it's a bit of a niche app, and not exactly for everyone, but for those people who need it, HexReader is one of those developer utilities that you're glad is there.

Summing It All Up

ChromeOS has come a very long way in a short period of time, and the developer story has improved right along with it. I find that I can do pretty much all of my everyday development tasks right from my Pixel (or, thanks to Chrome Sync, any one of my Chromebooks) while getting all the usual benefits of ChromeOS: speed, simplicity, and security.

And this is just the beginning - more developer tools are on their way, and the built-in Chrome DevTools continue to improve at a rapid rate.

App Links

I've included all the links to the apps I mentioned in this post here in one handy list for those of you looking to try some of these out:

Do you have any tools that you like to use to develop on ChromeOS? Be sure to let me know in the comments.

Happy Coding!

Monday, April 29, 2013

What jQuery 2.0 Means to Developers

On April 18, 2013, at long last, jQuery 2.0 emerged from Beta and was released to the public. The jQuery library has evolved a lot since it was first released, along with the way that developers use this iconic tool. The 2.0 release includes a couple of significant changes that affect how developers use jQuery, and in this post, I'll discuss what these changes mean to developers.

Dropping Support for Internet Explorer 6, 7, and 8

Many people seem to have overly focused mainly on what jQuery 2.0 does not do - that is, support older versions of Internet Explorer, 6 through 8. The jQuery team decided to split the library into two different versions: the 1.x line, which will continue to evolve and support older browsers, and the new 2.x line, which will remain feature compatible with 1.x while focusing on modern browsers.

This was absolutely the right decision by the jQuery team, for several reasons. Two of the most important reasons are:

  • The usage of IE 6-8 continues to decline, and according to the jQuery team, they were able to shrink the size of the library by 12% just by taking out the patches needed to support these older browsers.
  • The HTML5 development landscape is a lot more diverse than it was even just a few years ago, and continuing to support older browsers would have held jQuery back. I'll go into more detail about this below.

At first glance, this decision may seem a bit bold, especially since Windows XP still has a significant market share at the moment and IE8 is still in widespread use on that platform. However, since jQuery 1.x will continue to support these browsers, this essentially means that developers need to think in advance about where and how their app will be used, and consider the audience for their application.

HTML5 is No Longer Just About the Web

Notice that I didn't title this post "What jQuery 2.0 Means to Web Developers"? I did that very purposely, as a lot of HTML5 development is increasingly no longer tied directly to Web browsers. Several application platforms have emerged that use HTML5 as their development language, and for these platforms, the notion of older browser support simply isn't very relevant.

In these and many other up-and-coming scenarios, support for older IE versions isn't needed and results in dead code that will never be executed. By removing this code from version 2.0, the jQuery team can focus more on modern development scenarios, which will help position the library for further growth and adoption in the scenarios I've listed above, as well as newer ones that haven't even come along yet.

As a result of this change, jQuery 2.0 is just 82K when minified - and that's for the whole standard library. But jQuery introduced a modular build system back in 1.8. Use the custom build system to exclude things you don't need, and you can get it down to under 20K.

These two steps - dropping support for older IE versions and introducing a modular build system - are, in my opinion, two of the most important changes that jQuery has made in the past few versions. They make jQuery more attractive for application development beyond just web sites, which opens a whole new set of opportunities for the library, as well as additional potential sources of innovation and contribution.

So Which Version Should You Use?

The decision about which version of jQuery to use isn't really that complicated. If a project that you are working on needs to run on older browsers, then you should stick with jQuery 1.x. If the project doesn’t need to work on those older browsers, or you're building an app for one of the platforms I listed above, then you can move to the newer 2.x branch.

You can use IE Conditional Comments to make sure that the user's browser gets delivered the right version of the library by using the following code (from the jQuery blog):

<!--[if lt IE 9]>
    <script src="/path/to/jquery-1.9.1.js"></script>
<![endif]-->
<!--[if gte IE 9]><!-->
    <script src="/path/to/jquery-2.0.js"></script>
<!--<![endif]-->

Of course, Conditional Comments are going away in IE 10, but since IE10 is a much more capable, modern browser, that won’t matter and the above code will continue to work - IE10 will just get the 2.0 version.
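If you'd rather not rely on conditional comments at all, the same routing can be done with a simple feature test, since IE 6-8 are the relevant browsers that lack addEventListener. This is just a sketch of the approach (not code from the jQuery blog), and the paths are placeholders:

```javascript
// Pick a jQuery line via feature detection: old IE (6-8) lacks
// window.addEventListener, so its absence routes to the 1.x branch.
function jqueryVersionFor(global) {
  return ("addEventListener" in global)
    ? "/path/to/jquery-2.0.js"     // modern browsers
    : "/path/to/jquery-1.9.1.js";  // IE 6-8
}

// In a page you would then emit the matching <script> tag, e.g.:
//   document.write('<script src="' + jqueryVersionFor(window) + '"><\/script>');
console.log(jqueryVersionFor({ addEventListener: function () {} }));
// → /path/to/jquery-2.0.js
```

The trade-off is that conditional comments let old IE skip downloading the unused file entirely, whereas a script-based pick runs in every browser - which is why the jQuery blog's conditional-comment snippet is the more common pattern.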

When jQuery was first released, its two primary purposes were a) to abstract away differences among browser APIs and b) make manipulating the DOM a lot easier and more predictable. Since then, it has become far more useful and versatile, and the 2.0 release takes it a further step in the world beyond the web browser.

The decision to discontinue supporting older versions of IE may seem, on the surface, to be mostly about saving space and focusing on modern browsers - noble goals in themselves, of course.

The real news here, however, is that with this decision, the jQuery team is clearly positioning the library to be relevant to development scenarios beyond web applications. That's a significant step in jQuery's evolution, and I'd expect to see the jQuery team flesh this out more as these scenarios become more and more prevalent.

Monday, February 25, 2013

Rich Notifications in Chrome Packaged Apps

Several modern application platforms provide mechanisms for notifying the user of interesting events. Android, iOS, Mac OSX, Windows - they all have a fairly common, standard way of organizing notifications and displaying them to the user.

With the release of Chrome Canary M27, the Chrome Packaged Apps platform joins that club by providing the Rich Notifications API on ChromeOS and Windows (Mac and Linux to follow soon). This API goes beyond the web-standard HTML notifications system by allowing more visually rich notifications to be displayed (since the official W3C spec recently removed support for HTML-based notifications). The API is fairly flexible, allowing developers to specify one of four basic layout templates and register for events that happen on individual notifications.

Note: although the API is fairly stable and will soon be exiting the experimental phase, it is still undergoing changes. Use with caution.

The API supports 4 types of notifications:

  • Simple - an icon and some text content
  • Basic - icon and text content, and up to two buttons
  • Image - icon, text, and image, and up to two buttons
  • List - icon and list content, and up to two buttons

Chrome Notifications User Experience

Chrome Notifications are displayed according to the underlying OS’s conventions. On Windows, for example, the notification appears as a “toast” pop-up in the lower right corner of the screen.

Notifications can contain images in addition to text.

In addition to the notifications themselves, the Chrome Notifications API provides a notifications center that organizes the notifications by criteria such as date, priority, etc.

The Notifications API

The API itself is fairly simple. Notifications can be created, updated, and deleted using straightforward methods:
function create(DOMString notificationId, NotificationOptions options, 
    CreateCallback callback);
function update(DOMString notificationId, NotificationOptions options, 
    UpdateCallback callback);
function delete(DOMString notificationId, DeleteCallback callback);

A few things to note when using the API:

  1. The notificationId parameter is a string of your choosing, as long as it is unique (which is required to keep the notifications center from becoming confused).
  2. The NotificationOptions structure defines the content of the notification - its type, title, message, buttons, and so on.
  3. Some of the NotificationOptions properties allow you to specify URLs for images; they currently do not accept data: URLs (though this is planned). In addition, you can use the chrome.runtime.getURL() method to retrieve an app- or extension-relative path to an image resource included with your package.
  4. Although the API provides parameters such as priority, the API does not make any guarantees as to how such parameters will affect the display of notifications - they are just hints to the system (for example, the OS may decide that a low priority notification merely lights up the notification center icon instead of displaying it). Currently, normal priority notifications (0) are displayed for 7 seconds, higher priority (1 and 2) are displayed for 25 seconds.
  5. The update() method can be used to transform a notification from one type into another, such as from "simple" to "image". Very cool.
  6. As with many other Chrome APIs, this one is asynchronous - it uses callbacks to indicate to the calling program the result of each function.

For example, creating a simple notification looks like this:

var path = chrome.runtime.getURL("/images/myicon.png");
var options = {
   templateType : "simple",
   title: "Simple Notification",
   message: "Just a text message and icon",
   iconUrl : path,
   priority: 0
};

chrome.experimental.notification.create("id1", options, crCallback);

function crCallback(notID) {
   console.log("Successfully created " + notID + " notification");
}
Notification Events

In addition to the API functions to create, update, and delete notifications, your program can register to listen for notification events. There are currently five such events.

  • onDisplayed - the notification was created and displayed to the user
  • onError - there was an error with the notification
  • onClosed - the notification was closed, either due to a system timeout or by the user (and it is possible to distinguish the difference)
  • onClicked - the body of the notification itself was clicked
  • onButtonClicked - the user clicked on one of the buttons in the notification (this event fires on the notification itself, and the index of the clicked button is a parameter).

Events are registered using the typical addListener() method for Chrome events:

chrome.experimental.notification.onDisplayed.addListener(fDisplayed);
chrome.experimental.notification.onClicked.addListener(notClicked);
chrome.experimental.notification.onButtonClicked.addListener(notBtnClk);

function fDisplayed(notID) {
   console.log("The notification '" + notID + "' was displayed to the user");
}

function notClicked(notID) {
   console.log("The notification '" + notID + "' was clicked");
}

function notBtnClk(notID, iBtn) {
   console.log("The notification '" + notID + "' had button " + iBtn + " clicked");
}


The Chrome Rich Notifications API marks another step in the process of bringing native-app-like capabilities to Chrome Packaged Apps and Extensions. To get started, check out our example on the GoogleChrome GitHub. Consider adding support for this API in your next project to give your users an experience that is more tightly integrated with their OS and in line with what they’ve come to expect from native applications!