The online journey of a technophile, by Steve Brownlee

Mike Amundsen visited the Lonely Planet offices recently and spent all day discussing the current state of API development, and thoughts on what the future holds – for those brave enough to take the risk.

One thing that I walked away from the day-long event with was the thought, “Ok, so we’ve mostly been doing this for a decade and we’ve kinda painted ourselves into a corner, but there’s a few people who are trying to do it the right way. Why can’t we, today, start churning out hypermedia APIs and clients that can easily consume them?”

Well, the answer is: It’s hard.

The solution is to make it easier. How do we do that? Let’s start with how things work today.

Current Client Strategy

The way API consumers write clients these days is based on having a large amount of out-of-band knowledge of the application that the API is exposing. For example, a client developer must read the documentation of an API to discover that you get a list of users at api.foobar.com/users with a GET. But after you get a list of users, what then?

Well, read more docs and discover that you can view, update or delete a specific user at api.foobar.com/users/1a883efc1d. Ok, that’s good, but let’s assume that there are orders, wishlists, and events related to users. How do I interact with those?

Off to the docs to discover that I get a user’s orders at api.foobar.com/users/1a883efc1d/orders. If I want to then display the items in the order, I need to GET that order, discover the unique identifier and then GET from api.foobar.com/orders/8ee1babc07ed which provides me with an array of items in that order.

Rinse and repeat for every single aspect of the application state that the API exposes.

First, in this workflow, knowledge of the API has to be discovered by a client developer by reading all of the documentation for each resource and URI exposed. The developer needs to then have a mental map of how all those resources fit together for their client application.

Next, those resource URIs get translated into very specific client code that responds to user interactions and then calls the appropriate endpoint. Lots of functions are written like addUser(), updateOrder(), and listOrderItems() where the client developer instructs their software to hit the right URI and use the right HTTP method in order to perform the appropriate action (i.e. change the state of an application resource).
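A sketch of what that tightly coupled client code tends to look like (the base URL and the request-descriptor style are illustrative; a real client would perform actual HTTP requests):

```javascript
// A sketch of today's typical tightly coupled client: every function hard codes
// the URI structure and HTTP method the developer learned from the documentation.
const BASE = "https://api.foobar.com";

function addUser(user) {
  return { method: "POST", url: `${BASE}/users`, body: user };
}

function updateOrder(orderId, changes) {
  return { method: "PUT", url: `${BASE}/orders/${orderId}`, body: changes };
}

function listOrderItems(orderId) {
  return { method: "GET", url: `${BASE}/orders/${orderId}` };
}

// If the server ever renames /orders or swaps PUT for PATCH, every one of
// these functions (and every shipped client containing them) breaks.
console.log(listOrderItems("8ee1babc07ed").url);
```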

What do we have now?

We have clients and an API that are heavily coupled. Change a URI and all clients break. Change a POST to a PUT and all clients break. Add a key/property value to one of your representations, and clients that have written API schema validators will break. We need to write clients and servers that are more loosely coupled, so that the server can own its own namespace, change URLs, and the client remains unbroken. The client should care about state transitions that are provided by the server rather than being tightly coded against URLs.

How can we change the course and have the client and server speaking the same language, while remaining flexible?

Semantic Discovery

Having written a few API clients, I felt this gap, even if I couldn’t verbalize it. That’s what Mike was able to do. He called it the Semantic Gap.

Defining what a representation… um, represents

The media type

In discussing how to create a common definition for a particular resource, the current solution for truly common types on the Web is to create a new media type (see the full list at the IANA site). These include standard image types like jpeg, png, and gif. There are also standard text formats like text/xml, text/html, and text/css. These media types give two systems a common language for understanding what is contained in a particular message.

Browser says, “The developer requested a resource at this URI and stated that it must be of type text/html. Please give me that.”

Server says, “Ah yes, I have that resource and it is available as text/xml, text/json, and text/html. You wanted the HTML version, so here you go, browser.”

Browser then says, “Thank you, old chap. Since I got the resource and the server verified that it was of type text/html, I know that I have to parse that resource and render that as an HTML document structure in the main viewport. Here we go…”

In that lovely, everyday conversation between a web browser and a server, they were able to use a third-party, agreed-upon, standard so that they both had an understanding about what kind of content was being exchanged, and the client could take the appropriate action. This exchange works fantastically for common meta-types of elements used in building things on the Web. Two systems can agree that something is a CSS file, but there’s no understanding of what’s contained in that file. Similarly, two systems agree that an image file of a particular format was requested and sent, but what that image file represents is not part of the conversation.
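That exchange can be sketched as a tiny negotiation routine (the stored representations and the 406 fallback are illustrative, not any particular server framework’s behavior):

```javascript
// A toy sketch of content negotiation: the client's Accept header is matched
// against the representations the server actually has for the resource.
const representations = {
  "text/html": "<h1>Hello</h1>",
  "text/xml": "<greeting>Hello</greeting>",
};

function negotiate(acceptHeader) {
  // "text/html, text/xml;q=0.9" -> ["text/html", "text/xml"]
  const wanted = acceptHeader.split(",").map((t) => t.trim().split(";")[0]);
  const match = wanted.find((type) => type in representations);
  return match
    ? { status: 200, contentType: match, body: representations[match] }
    : { status: 406 }; // Not Acceptable: the two systems share no common language
}

console.log(negotiate("text/html"));
```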

So what if you want to define something more specific? A true type.

I’m going to use Lonely Planet for an example here since we’re going through the process of building a hypermedia API. We deal with resources like Places, Points of Interest [POI], and Travel Services.

For example, we could expose an API resource at the URI http://api.lonelyplanet.com/poi/9cdc3ba66af, and document that the representation of that point of interest would be delivered in the format of application/vnd.siren+json. But how do the systems know that it’s a POI? Obviously, the developers of the client know it because they read it on our documentation page, but that is out of band information. It’s not inherent in the communication taking place between the two systems.
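For concreteness, here’s a sketch of what a Siren representation of that POI might look like (the property names and values are invented for illustration; only the class/properties/links structure comes from the Siren specification):

```javascript
// A hypothetical application/vnd.siren+json representation of a point of interest.
const poi = {
  class: ["poi"], // but "poi" only means something to clients that read our docs
  properties: {
    name: "Flinders Street Station", // invented example data
    city: "Melbourne",
  },
  links: [
    { rel: ["self"], href: "http://api.lonelyplanet.com/poi/9cdc3ba66af" },
  ],
};

console.log(JSON.stringify(poi, null, 2));
```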

In order to have the clients that would use our API understand exactly what type of data is contained in a representation, without the need for reading documentation, we could author an RFC draft that proposes a new media type called text/poi. Unfortunately, even this is limiting because we would still be defining format more than substance. There’s no established understanding of what a POI is, how to let systems and users interact with it, and how we can facilitate state changes for a POI.

There’s still a large amount of time and effort needed by a client developer to build that understanding into the client after reading out of band documentation.

The semantic profile

Mike discussed a new type of semantic tool called Application-Level Profile Semantics [ALPS] that would allow any number of disparate systems to automatically understand what is being represented in a message – not just how it’s being represented.

I’m about to do a terrible job of explaining how it all works, so take the opportunity, if you’re interested, to read the specification (link above).

Here’s an example from the mapping guidelines.

In this quick example, you can see how an ALPS profile is used to define a common vernacular for what is involved in search, and then an example implementation of that profile in HTML.

  1. There should be a semantic element that represents the text descriptor on the profile. In HTML, this could be a div element, but since the profile also defines a state transition, it makes more sense to implement an input field that can either be auto-populated, or accept user input. To comply with the profile, it’s given a class of text.
  2. There should be an element that represents the search descriptor and starts a safe state transition (such as a GET). In HTML, a good way to do that is a button on a form. To comply with the profile, it’s given a class of search.
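A minimal sketch of that HTML implementation, built here as a string so the descriptor-to-class mapping stays explicit (the form action and the field name are assumptions, not part of the profile):

```javascript
// Build markup complying with the ALPS search profile: the "text" descriptor
// becomes an input, and the "search" descriptor a button that triggers a safe
// (GET) state transition.
function searchForm(action) {
  return [
    `<form method="get" action="${action}">`,
    '  <input type="text" class="text" name="q">',
    '  <button type="submit" class="search">Search</button>',
    "</form>",
  ].join("\n");
}

console.log(searchForm("/search"));
```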

Let’s make this as easy as possible

Many developers today are used to working with a semi-REST API. By this, I mean that many APIs available today provide a list of URLs that expose resources, plus lots of documentation to explain what the resources are and how they relate to each other (this is also called out-of-band information).

It is then up to the developer to build a tightly coupled client with all of the state transitions hard coded into the client software. The client explicitly defines what is available to the user of the client, and the server simply becomes a puppet for the client to manipulate.

A full REST API, more accurately described as a Hypermedia API, implements HATEOAS in the representations sent to the client so that vastly less out-of-band information needs to be consumed by the developer and hard coded into the client. However, just providing links in the server response to guide state transitions, while very important, is only one part of the solution. We have to make it easy for client developers to understand them so that clients can be written in a more intelligent manner and not be so tightly coupled. The server should be able to own its own namespace and change how state transitions occur, with the client being blissfully unaware of it.
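In code, the difference is that a hypermedia client asks each representation for a transition by its link relation instead of constructing URLs itself. A minimal sketch (the links/rel/href shape follows the common hypermedia pattern; the user data is invented):

```javascript
// A loosely coupled client: it only knows link relations, never URL structure.
function follow(representation, rel) {
  const link = (representation.links || []).find((l) => l.rel.includes(rel));
  if (!link) throw new Error(`server offers no '${rel}' transition`);
  return link.href; // the server owns this URL and may change it freely
}

const user = {
  links: [
    { rel: ["self"], href: "https://api.foobar.com/users/1a883efc1d" },
    { rel: ["orders"], href: "https://api.foobar.com/users/1a883efc1d/orders" },
  ],
};

console.log(follow(user, "orders"));
```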

Therefore, we need more tooling for developers to work with Hypermedia APIs. Here are some ideas.

Idea #1: Hypermedia Explorer

How useful would it be to have a visual explorer of a hypermedia API that shows nodes and edges in a link-graph style UI? It would also provide a key showing all the resources provided by the API, and the types of state transitions. This would allow developers to filter the graph to find information pertinent to their current task, especially if the API is complex, with dozens, or hundreds, of resources.

Since there’s no universal format for how resources are represented, an explorer client would need to be extensible, much like modern text editors such as Sublime Text or Atom. Developers could write their own plugins to accept Siren, JSON-LD, HAL, or Collection+JSON when working with JSON formatted representations.

Idea #2: Extensible Code Generators

Client side

If there are agreed-upon semantics governing a type of resource – a person, a book, an invoice, etc. – then client code generation can become more automated by reading a profile, and then producing some boilerplate templates that implement the profile. In addition, a sample JSON or XML representation of a resource could be produced that the API developer could use.

  • Generate an HTML form
  • Generate a XHR request
  • Generate an img or an a element
  • Generate a table for listing resource properties
  • Generate data stubs in JSON or XML for use in development
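As a sketch of the last bullet, a generator could walk a profile’s descriptors and emit a data stub (the profile shape here is a simplification for illustration, not the actual ALPS document format):

```javascript
// Toy generator: produce a JSON data stub from a (simplified) semantic profile.
function generateStub(profile) {
  const stub = {};
  for (const d of profile.descriptors) {
    if (d.type === "semantic") stub[d.id] = ""; // placeholder value for development
  }
  return stub;
}

const bookProfile = {
  descriptors: [
    { id: "title", type: "semantic" },
    { id: "author", type: "semantic" },
    { id: "purchase", type: "unsafe" }, // transitions don't become data fields
  ],
};

console.log(JSON.stringify(generateStub(bookProfile)));
```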

Server side

I’m currently imagining that a semantic profile could be a starting point for an API developer, with some stub code generated from it. An extensible code generator could have plugins that allow the developer to (1) read a profile for a Book, (2) generate a Ruby module/method that (3) stubs out a Siren representation of a Book.

  • Plugin #1: Read semantic profile
  • Plugin #2: Implements Siren stubs of a profile
  • Plugin #3: Generates a Ruby method
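Sketched in JavaScript rather than Ruby for brevity, plugin #2 might look something like this (the simplified profile shape and the href conventions are my assumptions; the class/properties/actions/links keys come from the Siren specification):

```javascript
// Toy plugin: stub out a Siren representation from a (simplified) semantic profile.
function sirenStub(profile) {
  return {
    class: [profile.id],
    properties: Object.fromEntries(
      profile.descriptors
        .filter((d) => d.type === "semantic")
        .map((d) => [d.id, null]) // values to be filled in by the real resource
    ),
    actions: profile.descriptors
      .filter((d) => d.type === "unsafe")
      .map((d) => ({ name: d.id, method: "POST", href: `/${profile.id}` })),
    links: [{ rel: ["self"], href: `/${profile.id}` }],
  };
}

const book = {
  id: "book",
  descriptors: [
    { id: "title", type: "semantic" },
    { id: "isbn", type: "semantic" },
    { id: "purchase", type: "unsafe" },
  ],
};

console.log(JSON.stringify(sirenStub(book), null, 2));
```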

I’m still thinking about all this, but right now I believe the best way to go is to build an extensible framework and then let all the smart people in the world build the needed plugins.

Again, it really shouldn’t be hard to work with an API. If we can automate the tedious aspects of development, then we can focus more on solving problems. In five years, we should not still be having friendly arguments about how to version an API, or which media type is the best.

I want my kids to be able to build a hypermedia API that they can use to build their own applications about My Little Pony or Power Rangers. Because then, later in life if they choose to go into software development, they can focus on building products and not having the same arguments we’re having today.

Published on Wednesday, Oct 1, 2014 | 0 Comments

I joined Lonely Planet in March 2014 to help with the transition from being a traditional book publisher to a digital content powerhouse. It was a tremendous risk for me because there are technologies here that I’ve never used before, and I’d be back in the business of producing code for a public site that is visited by millions of people. I’d spent so many years producing business applications for much smaller sets of customers.

After speaking with the leadership team in place at the time, we agreed that we could help each other and I joined the team as a senior technologist. I’ll admit that it was a rocky start as the roiling changes to people and processes that were happening made it hard to get my bearings on how best I could help the team. Luckily, after speaking again with the leadership team, goals and plans were developed to steady the ship quickly and start moving forward as a coherent team.

Now, I didn’t join to manage a team. In fact, I wrote at the time that I didn’t want an official management position. I joined to mentor other teammates. I wanted to help other developers advance their skills without any kind of imaginary boundary between myself and them.

Then the day came when it was suggested that I become The Manager, and the other developers Report To Me. I thought about it almost constantly for two days, and had many discussions with my wife about it. Then I agreed, and I haven’t regretted a single minute.

Here’s why.

I realized that there isn’t, and shouldn’t be, a distinction between a mentor and a manager. With the way I’ve approached it, the only “manager” thing I do is approve time sheets. Everything else I’ve done with my teammates had the goal of making them better developers, more engaged and excited with the work we’re doing, and opening the door for them to become leaders themselves.

My singular goal, in fact, is to support them and be their advocate.

  • Make sure they have no obstacles in their work
  • Make sure they work together well as a team
  • Be a sounding board for their frustrations and aspirations, and help where I can with both
  • Protect them from useless/distracting information
  • Increase their skills so that they can excel, not only on our team, but in all their future jobs
  • Encourage them to get involved in the community, if they wish to do so
  • Teach them how to manage time and research information

It’s a responsibility that I take very seriously, but I also try to have some fun while doing it. To borrow a quote from one of my favorite philosophers, I’d like to think that this is my ultimate goal.

“When the best leader’s work is done the people say: We did it ourselves.” – Lao Tzu

Published on Wednesday, Sep 10, 2014 | 2 Comments

For the TL;DR version (which skips my long-winded trip down memory lane) go directly to the Future section below.

We’ve had this dream since 1997


It’s the dirty word of the web application and services development community. We’ve got plenty of it. Now that doesn’t mean that companies and OSS communities haven’t tried to help us out. They really have. Sun, Adobe, Microsoft, and Google have all put massive resources and weight behind trying to be the One True Platform of the web.

The problem has always been that money keeps getting in the way. Every vendor keeps trying to lock the developer community into its own opinionated platform because, of course, its ideas are the best (and, in many cases, they just so happen to support corporate profits).

Let’s take a look back and see where we’ve been and how we’ve tried to accomplish the dream of a common platform for the web.

Java Applet (1996-1999)

Ah, the language that everyone loves to hate, right? Well, not everyone hates it, but web application developers who have more than a few years’ experience tend to have a deep-seated loathing of the language.

Why is that?

Well, it may have something to do with Java Applets. Now, for those who weren’t around in 1995 when the Java applet was introduced to the language, you won’t have any context for the rise and fall of this very first attempt at a common platform for the web.

Many of you have likely never even seen the <applet> tag in an HTML document before, and you might want to count yourselves as lucky.

In fact, even before that – before web application development I mean – there was Windows development. Now before the World Wide Web came along and first started its disruption of the desktop OS, Windows developers were battling (and continued to battle for at least 5 more years) something called DLL Hell. Oh, I think back on it with sentimentality now, but I still remember how frustrating it was.

Well, Java Applets became their own version of DLL Hell as some browsers would support them, and others wouldn’t. Then only certain versions of certain JVMs would be supported on certain browsers. Then, as always with a new technology, the scum of the earth found out how to exploit it for nefarious purposes, so you can stack security issues onto the pile.

Then, the first inklings of a community for the web became aware and thoughts such as, “we shouldn’t be depending on third-party plugins,” and “the web should be open and not running binary applications where you can’t see the source code” started to enter our collective consciousness.

As quickly as it burst onto the scene, the Java applet withered as the savior of the web.

AJAX (2005-Present)

With the applet gone, we trudged along the wastelands of the web for a few years, making incremental advances, but nothing really revolutionary happened. Then, in February of 2005, an article was published by Jesse James Garrett titled “Ajax: A New Approach to Web Applications”. I remember the first time I read it. I was working in Pittsburgh at the time for EFI, building ColdFusion and Java applications, with JavaScript powered front ends.

We take it for granted now, but when we were first introduced to it, it was exhilarating. The dream of building true applications became a reality in our imaginations, as we were tired of the multi page paradigm we’d been handcuffed with all these years. Yes, we used frames and iframes to get around it, but XMLHttpRequest (XHR) put an end to all that. All our code could run in the same page without a secret hack we hid from the users.

Then Google Maps came out, and that started the revolution that’s still going on today. It showed the true power of the AJAX promise. AJAX continues to be one of the underpinnings of modern web application development. However, perhaps its days are finally numbered, with the WebSockets API specification rapidly gaining hold in the community and full modern browser support.

Personally, I am entranced by the Meteor framework, and my current application, stackd.io, was originally built in it. Alas I had to migrate away, but in hindsight, I might have been better served staying with it.

Flex (2008-2012)

Man, I loved Flex. For as much as it completely went against everything that I stood for – open web, open standards, and open source – it really was freaking awesome. Finally, FINALLY, we could write desktop quality applications and use the web and browser as the delivery platform.

Of course, it was Java applets all over again, but those same arguments we made to kill the applet, we conveniently forgot when Flex came along. It was oh so pretty… and shiny… and new… and we could build even prettier, shinier things with it. If we had all been 12-year-old girls, the world would have been drowned in one big squeal of delight emanating from the dungeons where they kept the programmers.

However, there were islands of light still fighting the good fight. The W3C, Apache, Mozilla, and other organizations were still pressing on to make specifications that browser vendors could implement, so that we would, once again, be freed of the shackles of corporate underwriting.

Flex is still alive under the guidance of the Apache Foundation, but it, like the applet, has been quickly shuffled off to the fringes of web application development.


As a quick footnote on the whole embedded vector graphics plugin era, Microsoft was late to the game (that seems to have been their calling card since the Era of the Desktop) and released their competitor to Flex, called Silverlight. It arrived on the scene just in time to see the entire platform be disrupted and marginalized.

The JavaScript Boom (2010-Present)

I wrote an article on my old blog back in June 2011 entitled “The World is Changing – The New Landscape for Application Development”. In it, I described a fundamental change that I saw emerging; a change where everything would be powered by JavaScript on the web platform; a platform on which I was betting and still am. Three years prior, Jeff Atwood proposed a corollary to the Rule of Least Power, self-named Atwood’s Law: “any application that can be written in JavaScript, will eventually be written in JavaScript.”

Even when I wrote my article 3 years ago, people pooh-poohed the idea that any and all applications would be written in JavaScript. I believe that many of those applications have since been written in JavaScript, and the list will continue to grow. The skepticism may come partly from our inability to see what advances will be made in the future, particularly when we find those changes threatening.

Now with tools like LLVM, Emscripten, and asm.js, application developers can take traditional desktop applications, and compile them down to highly optimized JavaScript for delivery in the browser.

I’m betting on the web.


There are many parts that make up the all-encompassing HTML5 specification, but for web application development, there are a handful that will make a significant impact once adopted across all modern browsers (and many of them already are).

All these new features of web browsers will allow app developers to make richer user interfaces without having to jump through all the hoops we’ve been using for years, and will reduce traffic and memory usage.

Mobile Revolution

Not since the introduction of the World Wide Web to the average consumer has there been such a fundamental disruption to our daily lives, and the life of an application developer. The fact that we all carry powerful computers around in our pockets that take pictures, record HD videos, let us deposit checks, play good games to pass the time, keep up with our friends, provide GPS directions when we’re lost, hold every bit of information we have about our friends and family, check the weather, find a restaurant, read the news, and the million other things we can do with a computer is still astounding to me.

In fact, why we still call them “mobile phones” is beyond me. They should be called “pocket computers”, if anything. I rarely use mine for making phone calls anymore.

One thing that you might not have considered is that the recent advances in application development for the web have been driven by the mobile revolution. Responsive designs, media queries, local storage, location data… all these things are being driven by the mobile revolution. I doubt many of us would have cared about those things without the existence of pocket computers.

With HTML5 and mobile development still in its infancy, we’ve yet to see the amazing things that will come out of them, and there will be amazing things; I just don’t know what they are, but I’m excited nonetheless.

What does the future hold for us?

The Future

I’m excited about the next few years of application development for the web. I’m not even talking about native mobile app development, because I still believe that the web will win. Native mobile applications will always be needed, much the same as native desktop applications will always be needed, but over the next decade, the tools and infrastructure will continue to be built to make the web the main platform of distribution for applications.

It may not seem like it now, but there’s a massive community of very bright people working very hard to make it a reality. Just because it doesn’t exist now, don’t let it limit your imagination.

Web Component Platform

I really want the web to be an open and cohesive development platform. A lot of other really dedicated and smart people want the web to be an open and cohesive development platform. But what does that really mean? Here’s what I’d like it to mean.

  • I want to be able to include and use UI components as easily as a Node.js developer includes and uses the file system – var fs = require('fs');.
  • I want there to be one component repository and management tool that handles publishing and consuming components (ok, maybe two, to be realistic, but I’m dreaming here). Think of npm for Node developers, and bower for front end packages.
  • I want these components to be as granular as possible, without dependencies on any other component, parser, or generator, but still have the flexibility to have logically grouped components be available as packages. For example, if I want to include the entirety of the Angular library in my project, I should be able to do so, but if I just want to use the data binding, but not all the other features, I should be able to do that as well.
  • The components should be written in a way that they can be easily used by a build system, or task runner, for concatenation and compression.
  • The components should not include custom styling by default, but can provide some, if requested.

Why don’t we have this already? Well, the answer is simple: we just haven’t gotten there yet, but we are getting there. For me, the most important aspect of the HTML5 specification is Custom Elements. The point of this feature is to allow web application developers to build custom, self-contained components that will be rendered right in the DOM of an HTML page.

For example, this would be valid syntax and would render a list of bananas.

      <link rel="import" href="lib/bananaWorld/bananalist.html">
      <link rel="import" href="lib/bananaWorld/bananatype.html">
      <link rel="import" href="lib/bananaWorld/bananaorigin.html">


One of the first projects I found that is trying to tackle this new frontier is Google Polymer. It provides a large library of already-built components that you can try out. On top of that, Polymer is intended to be a next generation framework, implementing all of the HTML5 specifications that it can, while providing polyfills for those it can’t.

It’s a truly ambitious project, and, if combined with some other tooling (which I’ll discuss below), my dream workflow will be a reality.


Now Mozilla, of whom I’m a huge fan, has also started its own project called Brick which, while not as ambitious as Polymer, is also tackling the challenge of creating a library of custom elements for use in web applications. The reason I really like Brick is because of Firefox OS, which shares my belief in betting on the web.

We’re getting very close to having a complete development environment in the browser, supporting web applications built on a common, core set of UI components that are reusable, interoperable, and composable.

Brick currently has only a handful of components, but we are just getting started.


Back in 2012 T.J. Holowaychuk published an article about JavaScript components in which he proposed a detailed vision for delivering encapsulated components that can be used, and reused, in any JavaScript application. He elucidates how current package managers like npm and bower aren’t really fulfilling this need, and are simply buckets that hold everything from single libraries to entire frameworks that can be included in projects.

I urge you to read his entire article, but my own TL;DR version is this: stop publishing opinionated libraries and components. Make things as composable, as minimally styled, and as encapsulated as possible. If you want to publish a framework, it should itself be composed of standard components that can easily be pulled out piecemeal, if needed, and included in another project.

This is a completely new way of thinking about front end development. Those who have experience in “backend” development are familiar with this style of writing software. The tenet of writing composable classes and functions that do one thing, and one thing only, completely independent of any other class or function, is exactly what we should be doing with components.

Both npm and bower moved the entire front end software industry forward, and are integral parts of our development and ops workflows, but we can’t divert our eyes from the real goal.

Based on the vision from T.J.’s article, there are now two tools moving in this new direction. First is component, a CLI tool you can use to pull and publish CommonJS components. There’s also the component.io web site, which lets you search and browse currently available components.

This kind of tooling is what is needed, in conjunction with libraries like Polymer and Brick, to make web development more standardized, more cohesive, and more understandable. Then will come the arduous task of creating editors and IDEs that pull all this together for developers.

We still have a long road ahead of us, but at least we’re on the right road now and driving along.


There are now some sites like customelements.io that are trying to aggregate components that are being published so that front end developers can get their feet wet building applications on top of them.


Another great aggregation site is WebComponents.org where you can read the specs, browse articles about custom elements, watch presentations, keep up with current browser support, and see links to libraries that are being created to leverage these new technologies.

sweet.js for those non-web type developers

If you haven’t yet, you really need to check out sweet.js. To prime you, think of CoffeeScript or TypeScript. Those syntaxes are sugar on top of JavaScript that allow you to write code in a more concise way, but then compile down to pure JavaScript for deployment.

Now, I’m not a huge fan of learning a completely new syntax for writing an entire JavaScript application because I feel it’s a level of abstraction too large to be made up for by the conciseness of the syntax. That said, I do believe that using sweet.js macros to make some parts of a JavaScript application more standards compliant would be a good thing.

Let me explain that in more detail.

There are two competing standards for how to build modular applications right now – AMD and CommonJS. Now, I prefer AMD because of its pervasiveness, but I also like CommonJS because of its cleanliness. Developers coming from Python, or Java, or {name language here} are used to a syntax like…

import os  
import sys  

…or, in Java:

import java.io.*;  
import java.util.*;  

We can use a sweet.js macro to write JavaScript code like this

import twitter.bootstrap.form;  
import twitter.bootstrap.button;  
import knockout;  
import sockets;  
import express.router;  
import box.DataStore;

var init = function () {  
   var CloudStore = DataStore.createCollection("cloud");
   // …
};

and have this produced from it

define(["twitter.bootstrap.form", "twitter.bootstrap.button", "knockout",
        "sockets", "express.router", "box.DataStore"],
function (form, button, knockout, sockets, router, DataStore) {  
   var init = function () {
      var CloudStore = DataStore.createCollection("cloud");
      // …
   };
});

In this specific case, the macro would convert those import statements into AMD style module definitions, but it could also produce CommonJS style composition – whichever you use in your applications. The source code then looks more comfortable to developers from other languages, making it easier to understand without having to learn a completely new language syntax on top of JavaScript. The rest of your application would be written in native JavaScript.
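As a rough sketch of the CommonJS side of that expansion, a macro might rewrite each import line like this (the dot-to-path mapping is an assumption about how such a macro could resolve modules):

```javascript
// Toy macro expansion: one `import a.b.c;` line to CommonJS require() form.
function toCommonJS(importLine) {
  const path = importLine.replace(/^import\s+/, "").replace(/;\s*$/, "");
  const name = path.split(".").pop(); // last segment becomes the local binding
  return `var ${name} = require("${path.split(".").join("/")}");`;
}

console.log(toCommonJS("import twitter.bootstrap.form;"));
// → var form = require("twitter/bootstrap/form");
```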

Betting on the web

I’ve used this catch phrase a couple of times in this article, and I’m not even sure where I picked it up, but I’m a firm believer in it. There’s a whole lot of passionate, articulate, talented, intelligent, and motivated people who see the web winning for application development. Progress continues despite the naysayers. New tooling is being developed despite the short-sighted. It’s the long play, and having a shared, common vision that includes making standards, fighting fragmentation, and resisting corporate self-interest will make that play successful.

It’s the long play that’s currently in its 17th year of strategy rollout. It’s had its ups and downs, but over time, the final goal has remained consistently in sight.

The future is in your hands

The future of the web is in your hands now – you front end developers who are just starting your journey. Front end development will be entering its Golden Age soon (heck, it may have already started). More companies are realizing that they need people who can make cohesive, high-performing, scalable, resilient web applications that can serve multiple delivery platforms (desktop/mobile/TV) and engage with customers.

My time will be ending sooner rather than later. I’ve been doing this for over 20 years now, and I’ve been spending a lot of my time lately passing down skills and wisdom to the next generation of developers. I’m still deeply passionate about keeping my finger on the pulse of what’s currently available, but lately I’ve been shifting my efforts toward helping out with what may come.

It’s time to start thinking about how we, as a community, can further the goals of the World Wide Web as the greatest development platform ever conceived. It may not seem like it on a daily basis, but the choices you make today may have a deeply profound impact on how the web platform evolves. Are you going to help it or hinder it?

Choose wisely, my young padawans.

Published on Monday, Mar 3, 2014 | 9 Comments

I stumbled across a great article today that described how we can customize our Chrome Dev Tools interface. Since dev tools is all HTML, CSS, and JavaScript, all we need to do is apply some new CSS in a particular file (see article for details).

I love the Solarized Dark theme, and have both my terminal and my Sublime Text editor in it, but for Chrome, I chose to go with the Solarized Light theme.

So here’s what my dev tools interface looks like now.
Styled Dev Tools

Published on Thursday, Aug 29, 2013 | 4 Comments

No Higher Form of Praise

When people call me a geek, I take the moniker with pride and own it. I can think of nothing nobler than having a passion for science, technology, engineering or mathematics (STEM). These are the fields upon which all of modern society is built, and from which all other fields obtain their advances.

I feel it is one of my core duties as a father to instill that passion in my girls. I am exposing them to science and technology at every opportunity. I am also proudly watching my oldest daughter develop a healthy, natural talent for mathematics. My bubbly, glittery, overly-talkative 7-year-old girl, who was speaking in full sentences at 1 year old, has been doing basic multiplication and division while she struggles with writing and reading. The complete opposite of what I envisioned her skillset to be.

She is well ahead of what they are teaching in class. I have been teaching her at home, and it’s a lot of fun watching her skills blossom.

Why Are Leaves Green in the Summer, but not in the Fall?

So today, we are starting the experiment that will teach them about chlorophyll and why leaves change colors in the fall.

Mashing and Cutting

Ok, it ended in total failure, and I have no idea why. I remember doing this experiment as a child and it working perfectly. I set the jars out to let the alcohol evaporate, but when all was said and done, all that was on the coffee filter strips was a thin green line. No yellow, no red… nothing else.

So we’ll try again next weekend and perhaps do a better job of mashing up the leaves. This time, I just let the girls rip the leaves apart into tiny shards; next time, I’ll get them to mash it all up into a paste.

Regardless, they both had fun with the experiment and I at least got to explain to them why the coffee filter absorbed the green chlorophyll via the alcohol.

Published on Tuesday, Aug 27, 2013 | 5 Comments

About Steve

I am a technologist, and have been ever since 1980 when I got my very first TRS-80 and programmed it to do my math homework. I love to share the gift of technology with others and show them the wonderful things it can do for them, and how they should not fear it, but embrace it.
Find out more about me at Vizify....


Entries (RSS)
Comments (RSS)