Quick Recap of CSSConf 2016

I recently had the opportunity to speak at CSSConf 2016. These are some of my personal notes on the experience. For a more extensive recap of the talks, check out Miriam’s write-up.

CSSConf 2016 sticker

The Conference

The conference was held in Boston, which was great, because it finally gave me a reason to travel there. I didn’t have much time to explore, but I liked what I saw. Nice city.

Boston at night

The talks were awesome. CSS doesn’t always get a lot of love from my programmer peers, so it was nice to be in the company of people who geek out about CSS as much as I do. Even when attendees disagreed on certain points, it was refreshing that they had opinions and cared enough about CSS to argue them.

The conference took place at Laugh Boston, a comedy club, which was a fantastic choice. The casual atmosphere was a welcome change of pace from your typical conference room. And now I can say I’ve performed at a comedy club.

My Talk

You can watch my talk, Silky Smooth Animation with CSS, embedded below. Slides are available here.

The wifi went out during my talk, which taught me a very valuable lesson in public speaking: make sure that you have your demos saved locally AND that they work offline. I only had the former. Fortunately I was able to hop on the wifi from the pub next door, otherwise I would have been dead in the water.

Until Next Time

It was great to finally meet a lot of people I’d only known on Twitter, as well as a bunch of new people. The front-end community is top notch. Other talks are up on the CSSConf 2016 site. Definitely worth checking out.


RevolutionConf 2016

I recently spoke at RevolutionConf in Virginia Beach, VA. It was the inaugural event for the conference, and my first time speaking at a conference, so I figured I’d write a few words about it.

RevolutionConf logo

My talk was about implementing motion detection with JavaScript. I’ve published it in article form, if you want to check it out.

Truth be told, I was super nervous in the weeks leading up to the conference. Public speaking seemed like a natural step after writing in this blog for so long, but that didn’t make it any less terrifying. But once I got to the venue, checked out the conference rooms and had a chance to meet some folks, the nervousness completely subsided. It just felt like a bunch of people, myself included, excited to share their craft and learn stuff.

Will Boyd speaking at RevolutionConf

A couple highlights from some of the sessions:

  • Kevin Jones gave a great talk on the history of cryptography and how that history is repeating itself. Really interesting stuff.
  • David Bates’s talk was titled “How to Make IoT Devices Speak with Fire”. I thought the title was figurative. No, he literally showed us how to hook up flamethrowers to remotely accessible IoT devices.
  • Brent Schooley gave a primer on using Swift, which I was happy to see. From my personal (and short-lived) experience as an iOS developer using Objective-C… Yeah, Swift looks nice.
  • Pawel Szymczykowski’s talk embodied the sort of gleeful “let’s take this to the extreme” spirit that’s so fun to watch. What started as a little competition in Crossy Road led to him building a touchscreen-tapping robot guided by computer vision.
  • Julia Gao gave a talk on bringing functional programming into your front-end development. A few of my coworkers are crazy-go-nuts over functional programming, so it was nice to finally see what they’re going on about.

All things considered, I really liked RevolutionConf 2016. Beyond the talks, the people were cool, happy hours were fun, and the other speakers I chatted with were really supportive. I hope to see everyone again at RevolutionConf 2017.


Motion Detection with JavaScript

I recently gave a talk at RevolutionConf about writing a motion detecting web app with JavaScript. This is basically that talk, but in blog form. Live demos and all the source code are available at the end of this article.

The Premise

I wanted to see what my pets do when I’m away. But I didn’t want a continuous live stream, since that would be pretty boring (they sleep a lot). Instead, I decided to build something that would watch for motion, take a snapshot of anything that happens, then upload the snapshot to Twitter for remote viewing.

Basic flow of motion detection web app

Just for kicks, I decided to do this as a web app, all in JavaScript.

Accessing the Webcam

The first step is to access the webcam. This is actually really easy with WebRTC, which is natively supported by modern browsers… unless your browser is Safari. Boo. Another wrinkle is that WebRTC has some syntax differences from browser to browser, but the adapter.js shim will fix that for you.

Anyway, to grab a stream from a webcam, start with a <video> element in your HTML.

Then add a bit of JavaScript.
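As a rough sketch of what that looks like (the element id and exact constraints here are my assumptions, and modern browsers expose the stream via srcObject):

```javascript
// Assumed markup: <video id="video" autoplay></video>

function startStream() {
  // Request a 640x480 video stream; the browser prompts for permission.
  return navigator.mediaDevices.getUserMedia({
    audio: false,
    video: { width: 640, height: 480 }
  }).then(function (stream) {
    // Display the live stream in the <video> element.
    document.getElementById('video').srcObject = stream;
  });
}
```

Older browsers used prefixed, callback-style versions of getUserMedia, which is exactly the sort of difference adapter.js smooths over.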

This will attempt to grab a 640px by 480px stream from an attached webcam. Users will be prompted to permit access, but assuming they do, the stream will be displayed in the <video> element on the page. Check out this quick demo to see it in action.

Grabbing Still Frames

We need to capture still frames from the streaming video so that we can do motion detection (more on this later) and potentially upload them as images to Twitter. I settled on an interval of 100ms between captures, which is 10 frames per second.

We start by grabbing the <video> element with the stream on it from the page. Then a <canvas> element is created in memory, though you could also have it on the page for display.

A simple setInterval() allows us to capture a new still frame every 100ms. Each capture is drawn onto the <canvas> element by calling drawImage() and passing in the <video> element. It’s smart enough to just draw whatever is visible on the <video> element at that very moment.

Once something is drawn on a <canvas>, you can save it off as an encoded image string. We can use this string later when uploading to Twitter.
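Sketched out, the capture loop might look something like this (the element id and image format are my assumptions):

```javascript
var CAPTURE_INTERVAL = 100; // ms between captures, i.e. 10 fps

function startCapturing() {
  var video = document.getElementById('video');

  // An in-memory canvas matching the stream's dimensions.
  var canvas = document.createElement('canvas');
  canvas.width = 640;
  canvas.height = 480;
  var context = canvas.getContext('2d');

  setInterval(function () {
    // Draw whatever the <video> element is showing at this moment.
    context.drawImage(video, 0, 0, canvas.width, canvas.height);

    // Save the frame off as an encoded image string for later upload.
    var imageString = canvas.toDataURL('image/png');
  }, CAPTURE_INTERVAL);
}
```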

Of course, we don’t want to save and upload every still frame we capture, just the ones with motion. Fortunately, <canvas> gives us the tools to detect motion.

Diffing

So, what exactly is “motion”? A video (such as a stream from a webcam) is just a bunch of still images shown in rapid succession. Movement is perceived when there are changes between these images. So to check if motion has occurred between two frames of a video, we check for differences between the two images, also known as “diffing”.

We’ve already covered how to draw images onto a <canvas> from a <video>. By default, drawing something onto a <canvas> just covers up whatever was already there, but this can be changed to show you the differences instead.
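A sketch of that change, assuming a 2D canvas context that already holds the previous frame:

```javascript
function drawDiff(context, video, width, height) {
  // 'difference' mode: each pixel becomes the absolute difference
  // between the new frame and the previous one. Unchanged pixels
  // go black; changed pixels light up.
  context.globalCompositeOperation = 'difference';
  context.drawImage(video, 0, 0, width, height);
  var diffData = context.getImageData(0, 0, width, height);

  // Redraw the current frame normally so it becomes the baseline
  // for the next comparison.
  context.globalCompositeOperation = 'source-over';
  context.drawImage(video, 0, 0, width, height);

  return diffData;
}
```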

Here’s an example of an image drawn on top of another with the aforementioned difference setting. Dark pixels indicate areas with little to no motion. Brighter pixels show areas where something moved (in this case, mostly my cat’s head).

Diff image

Scoring the Diff

We can see that motion happened on the <canvas>, but how do we turn this into data that we can programmatically evaluate or “score” for motion? The answer is getImageData(). This function returns an ImageData object that has a data property. This property is a long array of numbers, with every chunk of 4 numbers representing a single pixel (red, green, blue, and transparency).

Pixel data

Remember, when diffing, the brighter pixels indicate more difference which means more motion. So the higher the combined red, green, and blue values of a pixel, the more motion that occurred in that pixel. By scoring every pixel like this, we can determine if values are significant enough to consider a capture as having motion.

Here’s a quick algorithm to do this.
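Something along these lines (the threshold values are illustrative, not the app's actual numbers):

```javascript
var PIXEL_DIFF_THRESHOLD = 32; // brightness for a pixel to "count"
var SCORE_THRESHOLD = 16;      // changed pixels needed to flag motion

function checkForMotion(data) {
  var score = 0;
  // Every chunk of 4 array entries describes one pixel: R, G, B, alpha.
  for (var i = 0; i < data.length; i += 4) {
    // Average the color channels to get the pixel's brightness.
    var brightness = (data[i] + data[i + 1] + data[i + 2]) / 3;
    if (brightness >= PIXEL_DIFF_THRESHOLD) {
      score++;
    }
  }
  return score >= SCORE_THRESHOLD;
}
```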

Post-Processing the Diff

Not only can we read pixel data from ImageData.data, we can also write adjustments back to it. By tweaking the pixel data this way and then redrawing it with putImageData(), you can essentially do post-processing on the diff image.

I like doing this to make the diff image monochrome green (set red and blue values to 0) and then amplifying the brightness (multiply green values by some constant). This makes it really easy to see where motion is. It also makes it easier to see the ambient visual noise, in case you need to tweak some threshold values to discern between noise and actual motion.
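As a sketch, operating on an ImageData-style pixel array (the multiplier is an arbitrary choice of mine):

```javascript
var GREEN_MULTIPLIER = 4; // arbitrary amplification factor

function tintAndAmplify(data) {
  for (var i = 0; i < data.length; i += 4) {
    data[i] = 0;     // zero out red
    data[i + 2] = 0; // zero out blue
    // Amplify green, clamped to the 0-255 channel range.
    data[i + 1] = Math.min(255, data[i + 1] * GREEN_MULTIPLIER);
  }
  return data; // ready to be redrawn with putImageData()
}
```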

Diff image with post-processing

On top of that, I like to downscale the diff image to 10% of its original width and height. Even a modest 640px by 480px image is a lot of data (307,200 pixels!) to churn through, so downscaling helps lighten the load. Sure, the image becomes super pixelated, but it’s still more than enough for the purposes of motion detection.
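The downscaling can happen in the same drawImage() call, assuming a small canvas dedicated to diffing:

```javascript
var DIFF_WIDTH = 64;  // 10% of 640
var DIFF_HEIGHT = 48; // 10% of 480

function getDownscaledDiff(diffContext, video) {
  // drawImage() resamples the full frame down to 64x48, leaving
  // just 3,072 pixels to score instead of 307,200.
  diffContext.globalCompositeOperation = 'difference';
  diffContext.drawImage(video, 0, 0, DIFF_WIDTH, DIFF_HEIGHT);
  return diffContext.getImageData(0, 0, DIFF_WIDTH, DIFF_HEIGHT).data;
}
```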

Diff image downscaled

Throttling

One important thing I haven’t covered is throttling. Without throttling, any continuous motion will cause rapid-fire captures to be saved and uploaded to Twitter, 10 times a second. That’s no good.

There are a couple ways to handle this. This is what I went with.

Sequence for throttling

The important thing is the “chilling” state, which is just a cooldown timer. I also added a “considering” state, during which captures can occur continuously, but only the highest scoring capture is kept and uploaded. Hopefully, the highest scoring capture is the most entertaining capture, since it has the most motion.
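A sketch of those states as a small state machine (the state names match the diagram, but the timer-driven transitions are my own framing):

```javascript
function createCaptureThrottle(consideringMs, chillingMs) {
  var state = 'watching';
  var best = null; // highest-scoring capture seen while considering

  return {
    getState: function () { return state; },
    getBest: function () { return best; },

    // Called whenever a capture scores as having motion.
    onMotion: function (capture) {
      if (state === 'chilling') return; // cooldown: ignore captures

      if (state === 'watching') {
        // First motion: start considering, keep this capture for now.
        state = 'considering';
        best = capture;
        setTimeout(function () {
          // Considering window over: this is where the best capture
          // would be uploaded, then we cool down.
          state = 'chilling';
          setTimeout(function () { state = 'watching'; }, chillingMs);
        }, consideringMs);
        return;
      }

      // Considering: keep only the highest-scoring capture.
      if (capture.score > best.score) {
        best = capture;
      }
    }
  };
}
```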

The Back-End

Even though all the motion detection is done client-side, we still need a back-end since we’re using the Twitter API (can’t have our secret Twitter API key revealed on the front-end).

I won’t go into specifics about the back-end, since it’s not directly related to motion detection. I’ll just mention that I wrote it in Node.js, so it still satisfies my “all in JavaScript” goal.

The Results

You can check out the finished web app for yourself. I call it Diff Cam Feed. It’ll ask you to sign in with Twitter so it can upload to your feed (that’s all it does, nothing shady, I promise).

Diff Cam Feed screenshot

I set it up on my laptop for a few trial runs around my apartment. Results were decent. Just my pets, doing pet things.

Various captures of pets

It also works fine on a Raspberry Pi running Raspbian, after installing UV4L.

Diff Cam Feed on a Raspberry Pi

Then I tried it on an old Android phone I had lying around (Samsung Galaxy S4). It works great, as long as you run it in Firefox. For some reason, Chrome on Android dies after a few minutes.

Diff Cam Feed on an Android phone

Wrapping Up

Overall, this turned out to be a fun side project. The motion detection looks really cool and isn’t very difficult to do. I made it all open source, so you can check out the code in this GitHub repo.

It’s not hard to imagine other uses for motion detection in a web app. If this interests you, then check out Diff Cam Engine. It wraps up a lot of motion detection stuff for you, so you can kickstart your own project.

Lastly, I’ve set up a website with several demos illustrating the concepts I talked about. Feel free to play around with them and check out the code.

Thanks for reading!

End of Year Update

It’s been a while, eh? Figured I’d make a quick post about some of the things I’ve been up to lately.

CareerBuilder Hackday

I won CareerBuilder’s Hackday competition back in August. The gist of Hackday is that you come up with an idea, research it, flesh it out, and then present it. If your idea is one of the top picks, then you win $10,000 and 6 weeks (paid work time) to implement it. I worked with 2 other people on an idea to ensure that job searches on our site always return some results (because anything is better than nothing). The idea won, we executed it, and it’s still live on the site now.

Rich Web Experience 2012

I went to the RWX 2012 conference in Florida. Very nicely executed conference. Plenty of sessions to pick from, though I felt like I made some bad picks a few times. Still, I learned a lot. I found myself particularly excited about MongoDB, which I didn’t know much about before. Can’t wait to spin up a project to try it out.

Other than that, the location was really nice. Right on the beach with plenty of restaurants and bars within walking distance. Good times.

Chrome Extension

I took on a fairly big side project to write my first Chrome extension: a tool for developers at CareerBuilder that lets us see logging, stats, and debug info for our web pages as they run. I can’t share it because, as you could imagine, it contains a lot of sensitive information about our environment. But hopefully I’ll get around to writing an in-depth blog post about it soon. The results were great, and even though it took much longer than expected (months), I really enjoyed making it.

CareerBuilder Hackathon

CareerBuilder had its first official Hackathon last week. We were given 24 hours, from noon to noon, to make whatever we wanted. We had the time, office space, food and drink, and permission to work on anything without having to justify it to the business. It was awesome to just take an idea and run with it. And the general environment was a lot of fun. Definitely something I hope becomes tradition.
