IR5 Tech Talk: Modular Development in JavaScript

March 11th, 2013 by Todd Anderson

I recently had the pleasure of presenting on modular development in JavaScript at Infrared5's bi-weekly Tech Talks. It was mostly centered around Asynchronous Module Definition (AMD) and the RequireJS library, but covered some of the history and possible future of module implementations in the JavaScript language. Dependency management and modular programming have been a large focus of mine in application development for some time, with an interest in how they are implemented across many languages. As more web-based projects cropped up, I first started looking at application frameworks that would support such a development and build workflow. This initially led me to Dojo, which I would highly recommend looking into. If you are already familiar with Dojo, it will come as no surprise that it led me to RequireJS, as it was created by the same developer – James Burke – who worked on the Dojo Loader. Eventually, it was AMD – and specifically, utilizing RequireJS and r.js – that I incorporated into the development workflow and build processes for projects, severing the tie to any specific JavaScript application framework. That's not to say that application frameworks don't have their place – especially on bigger teams. But such a discussion is perhaps a whole other Tech Talk in itself! You can view the presentation here.
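
To give a flavor of the AMD style the talk centered on, here is a minimal sketch of a module defined and consumed with RequireJS; the module names and the exported method are made up for illustration.

// greeter.js – an AMD module with one dependency (names are hypothetical)
define(['jquery'], function ($) {
  return {
    greet: function (name) {
      $('body').append('<p>Hello, ' + name + '!</p>');
    }
  };
});

// main.js – entry point loaded by RequireJS via a data-main script tag
require(['greeter'], function (greeter) {
  greeter.greet('Infrared5');
});

r.js can then trace those dependencies and concatenate the modules into a single optimized file for deployment.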

Tech Talk: Creating More Responsive and Quicker Websites and Web Apps

February 4th, 2013 by Kyle Kellogg

This past Friday, Infrared5 debuted the first of its bi-weekly in-house ‘Tech Talks’. I was lucky enough to give the first talk. I chose a subject I care very deeply about: how to make websites and web apps better, with less trouble. In order to fit the hour format, I focused on a few key points.

Given the ever-changing nature of the web, technology as a whole, and the way the two interact, the primary focus of my talk was how to make websites and web apps more responsive. The primary recommendation was to plan ahead for ‘N’ devices, because you don’t know how many screen variations will view the website or web app. In keeping with that, and to make websites and web apps faster, I paraphrased Jon Rohan on how to optimize CSS for faster rendering. To lighten and ease the load time, I recommended using Picturefill.js, which prevents heavy assets from loading when they aren’t required by the device. Finally, I discussed how we can use the WebKit Dev Tools to improve testing.
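
To sketch the general idea of conditional asset loading – this is not Picturefill's actual markup or API, just the technique it implements, with made-up file names and breakpoint – heavier assets can be requested only when the viewport warrants it:

// Swap in a large hero image only on wide viewports; smaller screens keep a lighter asset.
// (A sketch of the technique only – file names, element id and breakpoint are hypothetical.)
var hero = document.getElementById('hero');
if (window.matchMedia && window.matchMedia('(min-width: 768px)').matches) {
  hero.src = 'img/hero-large.jpg';
} else {
  hero.src = 'img/hero-small.jpg';
}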

While the talk wasn’t anything groundbreaking, it was helpful to talk about best practices and discuss these points. Here is a link to the slides presented during the talk: http://sqow.github.com/internet-better/

We look forward to sharing our next ‘Tech Talk’ with you!


Introducing madmin: An admin console for generating mock services with RESTful URIs.

January 29th, 2013 by Todd Anderson

Introduction

madmin is a node application that provides a means to construct RESTful URIs that are immediately accessible on the server. While URIs can be defined from the command line – using CLI tools such as cURL – madmin also provides an admin console as a GUI to aid in defining the URI and JSON response data structure.

The GitHub repo for madmin can be found at https://github.com/infrared5/madmin

Why?

madmin was born out of the intent to minimize time spent by front-end development teams building applications against a living spec for service requirements.

> The Problem

We found that our front-end developers were curating two service layers implementing a common interface during development of an application:

  • [Fake] One that does not communicate with a remote resource and provides _fake_ data.

    Used during development without worrying about the remote endpoint being available (either because it is undefined or because there is no network), and can be modified to provide different responses when testing how the application reacts.

  • [Live] One that does communicate with a remote resource, sometimes utilizing libraries that abstract the communication.

    Used for integration tests and QA on staging before pushing application to production.

This would allow the service layer dependency to be easily switched out during development and deployment while providing a common API for other components of the application to interact with (a sketch of the pattern follows).
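
With hypothetical names – not code from any of our projects – such a setup might look like the following, where the rest of the application only ever talks to a single userService:

// Fake implementation: no network, canned data for development.
var fakeUserService = {
  getUser: function (id, callback) {
    callback(null, { id: id, name: 'Test User' });
  }
};

// Live implementation: communicates with the real (or staging) endpoint.
var liveUserService = {
  getUser: function (id, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/user/' + id, true);
    xhr.onload = function () {
      callback(null, JSON.parse(xhr.responseText));
    };
    xhr.send();
  }
};

// The dependency swapped out at development/build time (useFakeData is a hypothetical flag).
var userService = useFakeData ? fakeUserService : liveUserService;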

Though these service layers are developed against the same interface, providing a common API, the curation of both in tandem during development can be exhausting as specifications and requirements change. When we multiplied that curation time across the numerous applications being developed, it became clear that the fake service layer needed to be eliminated from development – especially since it is not part of the release or the unit tests at all.

> The Solution

Instead of maintaining the service layer as a dependency swapped between these two implementations, if we could define the endpoints that the service layer communicates with, then we could eliminate the need for a fake service layer altogether.

Just as the service references are changed from staging to production, why couldn’t we provide a living service endpoint with the URIs that are being discussed and hashed out between teams? And why couldn’t we deploy that service locally and eliminate the need for a network resource to boot – we could continue our front-end development while relaxing on some remote, unconnected island!

That is what madmin sets out to do.

> The By-Product

Though the initial intent was to eliminate the curation of an unnecessary service layer from front-end development, by defining RESTful URIs with madmin we were actually producing useful documentation of the service layer and opening up communication between the back-end and front-end teams with regard to requirements and data structure.

Opening channels for communication is always a plus, and the fact that it provided self-documentation just seemed like a winner!

:: What It Is Not

madmin is not meant to replace writing proper unit tests for client-side applications that communicate with a remote service, nor is it meant to stand in for integration testing.

How

The madmin application works by updating a target JSON source file that describes the RESTful URIs. This file is modified through the madmin server application's own RESTful API. You can check out the schema for the source JSON file that defines the API at https://github.com/infrared5/madmin/blob/master/doc/madmin-api-schema.json.

While it is possible to interact with the madmin server-side application from the command line – with CLI tools such as cURL – a client-side application is available that provides ease of use and self-documentation.
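
Purely as a hypothetical illustration of the command-line route – the real endpoint paths and payload fields are dictated by the schema linked above, so treat the URL and JSON here as placeholders – adding a route with cURL would look something like this:

$> curl -X POST http://localhost:8124/route \
     -H "Content-Type: application/json" \
     -d '{"method":"GET","path":"/api/user/:id","response":{"id":1,"name":"Test User"}}'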

> Usage

Full instructions on how to clone and install dependencies for madmin can be found at the repository for the madmin project: https://github.com/infrared5/madmin.

Once installed, you can start the madmin server from the command line with the following options:

$> node index.js [port] [json]

The following example shows its usage and the default values:
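
$> node index.js 8124 ./routes.json

(Here 8124 is the default port; the JSON file path is just a placeholder for the source file described below.)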

The JSON source file provided will be read from and modified as you add URIs in madmin. The most common and easiest way to add URIs is to use the client-side console application, available at http://localhost:<port> after starting the madmin node server.

> Client-Side Console

Once the server is started, you can access the GUI console for madmin at either http://localhost:<port>/ or http://localhost:<port>/admin, with the <port> value either being the default (8124) or the one specified on the command line.

With an empty JSON API resource file, you will be presented with a console that provides an “add new” button only:
Empty madmin console

Upon adding a new route, you are presented with an empty editable console with various parameters:
Empty new route in console.

The following is a breakdown of each section from this route console UI:

– Method –

The Method dropdown allows you to select the desired REST method to associate with the URI defined in the Path field:
Route Method panel

– Path –

The Path field defines the URI to add to the REST service:
Path panel

The Summary field allows for entering a description for the URI. When input focus is lost on the Path field, the listing of Parameters is updated and allows for providing descriptions for each variable:
Route with multiple parameters

– Response –

The Response field allows for defining the JSON returned from the URI. As well, you can choose which response to provide:
Route Response panel

In reality, the route will always return a 200 status with the selected JSON body from either Success or Error. We often deliver errors on a 200 and parse the response to determine the outcome, and this was an easy way for the team to coordinate the success and error JSON responses returned from a request.
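
As a hypothetical illustration – the field names here are made up, not dictated by madmin – the two canned responses for a route might look like:

// Success response (returned with a 200 status)
{ "status": "ok", "user": { "id": 1, "name": "Test User" } }

// Error response (also returned with a 200 status; the client inspects the body)
{ "status": "error", "message": "User not found" }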

Viewing Route URIs and Responses

When saved, the new route will be added to the supplied source JSON file and the client-side madmin console will change to the listing of defined URIs:
Saved Route to madmin

As well, the path and its proper response will be immediately available to develop against.
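
From the front-end code, the mocked endpoint can then be requested exactly as the eventual real service would be – for example, with a made-up path and plain XMLHttpRequest:

// Request the route served by madmin just as the real API would be requested.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/user/1', true);
xhr.onload = function () {
  var data = JSON.parse(xhr.responseText);
  console.log(data);
};
xhr.send();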

With Error selected from the Response field:
Defined Error Response

With Success selected from the Response field:
Defined Success Response

As mentioned previously, the source JSON file provided when launching the madmin server is updated as you work on the URIs. If left at the default – or otherwise accessible from the server directory – you can point your web browser at that JSON resource file and check it for validity:
Updated JSON route URIs

— note —

The default admin console can be found at http://localhost:<port>/admin. As such, /admin is a reserved route and cannot be defined as a valid URI in madmin.

It is on the TODO list to allow a custom URI to be defined for the admin portal of madmin in order to free up the currently reserved /admin.

Requirements

> Server-Side

The madmin server-side application has been tested against Node.js versions >=0.8.4.

> Client-Side

The madmin client-side application utilizes some ES5 objects and properties – e.g., Object.create and Array.prototype.indexOf – and does not load any additional polyfills to provide support for older browsers.
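
If support for an older browser were ever required, a consumer would need to supply their own shim before the console boots – a minimal sketch of the kind of check involved (this is not something madmin ships):

// Example only: detect the ES5 features the client-side console relies on.
if (typeof Object.create !== 'function' || typeof Array.prototype.indexOf !== 'function') {
  // A real setup would load an ES5 shim (e.g. es5-shim) here before using the console.
  alert('This browser lacks the ES5 support required by the madmin console.');
}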

The madmin client-side application should work properly in the following:

  • Chrome 12+
  • Safari 4+
  • IE 9+
  • Opera 12+

Grunt Integration

The madmin repository includes build files for grunt, with support for <=0.3.x (grunt.js) and ~0.4.0 (Gruntfile.js), and tasks for linting and testing both the server-side and client-side code using Jasmine.

To run the grunt build tasks, simply run the following from the command line in the directory where you cloned the madmin repository:
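
(Assuming a global grunt CLI install, this is simply the default task:)

$> grunt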

Depending on your installed version of grunt, the proper build file will be picked up. To learn more about grunt and how to install it, please visit the grunt project page.

Conclusion

We saw a need here at Infrared5 to cut down on front-end development time spent curating multiple service layer implementations just to support development when resources – including server-side and network – were unavailable. The madmin application is our effort to reduce that time, extra effort, and code that never saw the light of staging or production.

In doing so, we hope madmin can open up the communication channels between server-side and client-side teams when discussing service requirements and JSON data structure, all while providing a living document of the endpoint URIs.

Hopefully you will find it as useful as we have!


TXJS: A Look at JavaScript as a Language and Community

August 30th, 2012 by Keith Peters

Recently, Todd Anderson and I attended TXJS, a JavaScript conference in Austin, TX. I wanted to give some general feedback on the conference itself, and then discuss a bit about JavaScript as a language and the JavaScript community.

The Conference

The conference was a one-day, two-track setup with nine slots and a day of training beforehand. Each slot was 40 minutes, which in my mind is short for a technical presentation. With a full hour, you can start to teach a few concrete techniques. But 40 minutes just leaves you time to get across a general idea, suggestion or viewpoint. In other words, in a shorter session, you might be able to say WHY you should do something, but in a longer session you could show HOW. Then again, even an hour is barely enough time to teach anything concrete and many people do not do well at it. All too often I’ve found myself getting bored and looking at my watch towards the end of a longer session. Perhaps conferences need to offer a mix of longer and shorter sessions.
It was very strange to be at a conference where I was not a speaker and knew nobody except the other person from my company I came with. I believe the last conference I attended without speaking at was Flash Forward NYC in 2004. In that sense, it was kind of a relief to just sit back and go to all the sessions and take it all in without having to worry about my own talk. Much stranger was just not knowing anybody there. Todd and I mostly hung out together and talked to a few others here and there. I’m far more used to meeting up with the usual crowd that has been present at every Flash event for the last 10 years. Also, being a speaker affords you a certain amount of mini-celebrity at an event, with people you don’t know coming up and talking to you, mentioning your talk, your site, your books, whatever. It was odd to just be another nameless face in the crowd. Odd, but probably good to get that perspective now and then.

The Training

The training on the first day was an overview of the JavaScript language from Ben Alman of Bocoup, a company located right here in Boston. Ben undoubtedly knows his subject matter and it was a solid day of training, but I would say his talk was targeted a bit more towards newcomers to the subject. I picked up on several small concepts I didn’t know and clarified several others, but it was largely a review of material for me. This is not to say that a review is bad. I went through a lot of the examples and tried out various iterations and came out with more confidence than I went in with, so that’s good.

Given the nature of the training and the time constraints of the sessions themselves, I can’t say that I walked away from the conference with a huge wealth of new knowledge of specific things about JavaScript. But I did walk away with several areas sparked with interest for future study, which I am actively pursuing. One of these subjects is Node.js.

New Interests

Node.js, for the uninformed, is a JavaScript engine outside of the browser, wrapped, optimized, and enhanced for use as a server, but also useful on a local machine. Node allows you to write server side applications in JavaScript, and also allows you to write command line JavaScript programs that run on your local machine. This can be useful for testing, build processes, automation tasks, etc. This is similar to how Ruby is largely used on the server, but commonly used for local build processes and other tasks as well. I picked up a book on Node.js and was amazed that within an hour of starting the book I was writing HTTP and socket servers. Not only did they work, but I understood every line of code that went into them. Amazing. Very simple yet powerful stuff that I’m glad to add to my knowledge base.
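
To give a sense of why that felt so approachable, a complete HTTP server in Node really is just a few lines – a minimal sketch using the core http module:

// A minimal Node.js HTTP server using only the core http module.
var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from Node.js\n');
}).listen(8080);

console.log('Server running at http://localhost:8080/');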

The next area of interest sparked at TXJS was WebGL. WebGL brings OpenGL ES – the 3D graphics library used on many platforms, including iOS and Android devices – to the browser through a JavaScript API. Although I wound up missing the one WebGL session at TXJS (a tough choice between that and another session going on at the same time), I vowed to do some investigation of it on my own. I'm still working through Node right now, but WebGL is next on the list.

Communities and Focus

I’ve been thinking a lot about tech communities lately. I got a foot in the door of the Flash community back in the early 2000s, and while I’ve always delved into various other technologies, I never got really involved in any other tech communities. Now, I’m not saying anything as inflammatory as “Flash is Dead”, but I do think the Golden Age of Flash is in the past and it’s probably not going to see the level of excitement and support it saw in its heyday. Personally, I do not have a ton of interest in where Flash’s current road map is leading it, which is what led me to become more interested in HTML5/JavaScript. But I haven’t joined the anti-Flash crowd either. We’re still pulling in lots of Flash work here at Infrared5, so I imagine I’ll have a hand in AS3 for a good long while to come.
Anyway, when I started getting more interested in JavaScript development, I naturally started looking at the JS community. Unfortunately, I think it has quite a different dynamic than what I have become accustomed to with Flash. Firstly, the JS community is probably a lot larger than the Flash community ever was. But for a community of that size, it seems like there are disproportionately fewer well-known names – JavaScript celebrities or “rock stars” if you will. I think a lot of this has to do with a lack of focus in the language and development practices. Love it or hate it, Adobe (and Macromedia before it) has always been a central point around which the Flash community revolved. They’ve held the reins on the product. You’ve always known that you’d be getting a new version of Flash every 18 months or so, and that there would be a finite number of cool new features in that update. If you were lucky enough, you might get onto the beta. Adobe/Macromedia have historically sponsored most if not all of the major Flash conferences, and the same group of evangelists and developer relations people show up to talk and listen. Moreover, Adobe has given the ActionScript language a focus, with official components, tutorials and sample code that established a set of best practices and a way to code.

Contrast that with JavaScript as it stands today. There is a standards committee that can’t seem to agree on anything. If they do agree on something today, it could be literally years before it is official. Meanwhile, browser vendors are moving forward implementing features years before they are official and sometimes before they are even agreed upon. Some are even implementing their own features that are not part of any standards process.

And that’s only the core language. Ask a group of JavaScript developers about the best architecture for an application and you will start a holy war. Classes or prototypal inheritance? The best MV* framework? AMD or CJS? Forget about it. World War III almost broke out a few months ago over the question of whether or not you should use semicolons! There’s even a growing question about whether or not it makes sense to code JavaScript IN JavaScript. With a steadily increasing list of languages that “compile to JavaScript” but offer different – possibly better – constructs for managing large applications, it is a valid question. But with so many of these new languages around, and no standard, we are again back to the question of which is “the best”. And the arguments ensue…

Summary

It’s an exciting time to be into JavaScript, but not an easy one. There is massive innovation happening on so many fronts. There are dozens of new frameworks and libraries coming out daily – more than any person could possibly keep up with. Standards are evolving and there is something new to learn every day. The pace of this innovation makes it hard to keep up, and the lack of focus means that no matter what you do, how you code, or which framework or library you use, there will be countless people waiting in the wings to tell you you’re doing it all wrong. My advice is to learn what you can, don’t worry about the rest, and have fun.

Keith Peters


TXJS: How I Found a Fresh Perspective in the Heart of Texas

August 28th, 2012 by Todd Anderson

Every year, developers from all over the world make their way to Austin, Texas, to spend a day learning the latest and greatest developments in JavaScript. This year, I had the pleasure of attending the third annual TXJS. Though I was drawn in by the promise of mid-June heat and the wild grackles of Austin, it was the wonderful venue and line-up of exceptional speakers that made TXJS an event I would highly recommend attending.

TXJS was a dual-track event, so conference-goers had to choose their talks wisely. Fortunately, the recordings will be made available online at some point. The talks I was fortunate enough to see did not disappoint. Though I picked up some tidbits about the language here and there, the real take-away from the event was a fresh perspective on developing. I will outline three rules I plan on making a habit of remembering:

Test Smarter

I am a huge advocate for testing and analysis in development. Loosely tied to that is my desire for refining the build and deployment process for projects. My ultimate goal as a developer is to deliver a product that provides the best experience to the user under the given environment. I have been pursuing that goal by writing unit tests, vetting and (sometimes) writing plugins for my IDE and creating proper build dependencies and deployment scripts. In focusing on this, my comfort level in testing the experience of a deployed application in its natural environment has plateaued.

Part of this may be due to the nature of everyday use of a browser, but mostly it is because of laziness and a “good-enough” mentality. I am quite familiar with the WebKit Inspector, as I use it every time I test in development and against the staging-ready build, yet I have stuck to using Safari as the browser for this. I have a problem with tabs and cache when it comes to my everyday browser. By “problem”, I mean I am a hoarder. By “everyday browser”, I mean the one I check email and RSS feeds in, click links in and (leave) open tabs in: Chrome. I have literally dozens of tabs going at a time. It is a problem. I’ll admit it. What is a bigger problem is that it was keeping me from the wonderful WebKit Inspector that Chrome provides, all because I didn’t want to clear my cache and ruin my setup.

This is where my time at TXJS started to impact my thinking. In Majd Taby‘s talk, Tools and Techniques for Faster Development, he presented useful tips and tricks for using the WebKit Inspector provided by the Google Chrome team. Taking and expanding on topics he discussed in Modern Web Development: Part 1, he showed some key features I knew I could not live without in my testing from that moment forward – namely the settings (settings for your inspector, can you believe it?!), more profiling options, event handler listings and expanded memory graphing.

From this talk, I realized that I should be engaging with the Inspector more. I run things through WebPageTest (which I think uses PageSpeed) and various browser plugins like Dom Monster and YSlow, but I am not engaged with the result – only reactive. I should be testing and experimenting more in the same environment the end product will be interacted with, while I am interacting with it.

Chrome Canary is now my interactive development and pre-staging deployment test browser, and I am happy to have seen the light and upgraded.

Majd Taby’s talk offered some clarity on how I can test smarter, taking advantage of the tools available to me to be a better developer.

Experiment & Challenge Current Implementations

Far too often, I find myself implementing the same solution for what appears to be the same problem again and again. While there is nothing technically wrong with that approach, as long as it is tested properly and known to be a valid solution, I might be missing out on discovering an alternate path that would provide the same desired result. Even if the new path is not as straightforward, it has the potential to get me thinking in new ways about a subject I thought I had mastered.

Alex Russell‘s talk, Overconstrained: the Secret Lives of Rectangles, reminded me of that fact. One of his projects, Cassowary JS, addresses constraint-based layout for DOM elements. The methodology used in addressing the issue is a bit more involved than that, but the end result is a toolkit for assigning constraints to elements whose layout will be updated at runtime.

[NOTE: The wording in the preface to this section, as far as weighing the validity of a new experimental path, should not be applied directly to the work Alex demonstrated. Nothing I mention in this section is intended to belie his work – it was his talk that made me realize my own shortcoming in not experimenting more with solutions. It's probably the talk I spoke to others about most afterward.]

I like a clean separation of layout and logic and lean towards Progressive Enhancement, staying far away from adding DOM elements and styles through JavaScript. I often find myself battling rendering engines to achieve a desired layout. Is it so wrong to dive into scripting a layout in hopes of coming to an understanding of how the layout routines are run? It certainly is not. I don’t want to throw out my principles of application design to ‘just get it done’, but I can be more open to exploring different choices that may provide a new angle on the problem.

Have Fun

The title of this section is not meant to sound frivolous. In fact, the talk that drove this point home to me involved highly technical material, but it was presented in a playful manner by Heather Arthur in her talk, Machine Learning for JS Hackers, on a subject that culminated in a project that had me and many others in the room laughing along – kittydar: face detection for cats.

The talk was actually more about machine learning and perception, yet the underlying tone (at least as I perceived it) was to follow your interests and experiment. “Have fun!” Arthur is not hired to write detection scripts for social networks that send Growl notifications on incoming cat picture uploads – at least as far as I am aware. The real message of the talk was machine learning, out of which experimentation and exciting projects arise. The talk made me realize that sometimes diving into a subject and experimenting is meant for your own enjoyment; it doesn’t always have to tie back to some larger body of work.

Conclusion

It’s great that I have found a method of working that is comfortable for me, but I shouldn’t stop experimenting and evaluating other methods and procedures that can help me find a solution or just have a laugh. Sometimes it takes being immersed in the intellect of others to light that fire. Thanks to Alex Sexton, Rebecca Murphey, the crew and sponsors for putting on a great event.
