2021-11-19T18:12:11.000Z
It's Hard to Say Who's Winning the Streaming Wars, But as a Customer I'm Definitely Losing

I've been known to rant about streaming UX in the past. At least this time my blog isn't rendered in JavaScript, so maybe we'll manage to stay on topic.

Yesterday, I tried to watch the 9th episode of Survivor 41. I say tried because apparently my computer science degree and 12 years of experience writing software are not sufficient to the task.

Normally I would accomplish said task using my YouTube TV account, but the episode was inexplicably unavailable, even though episode 8 was still there and episode 10 showed as scheduled for next week. Episode 9 was simply missing, with no additional information. Even searching to see if anyone else was having the same issue yielded no results. As of this writing, episode 9 is still nowhere to be seen in my YTTV account.

I asked my friend how she watched it. She said she had used her parents' Xfinity account, but recommended going to CBS directly. I installed the CBS app on my phone, prepared to watch with commercials, which is annoying but bearable for (hopefully) a one-off situation. The episode started playing with no problem, and even had Chromecast support, which I needed in order to watch on my projector, which has a Roku Ultra connected. But for some reason Chromecast only detected the Roku TV in my roommate's bedroom, not the Roku Ultra.

I then tried installing the CBS app for my Roku Ultra. The app installed fine, but when I tried to watch Survivor, it required a login with a local TV provider. This is the point at which I stopped, to spare my sanity.

Bonus rant:

It just so happened that yesterday I also got a new toy in the mail, the Anker Nebula Capsule, which I intend to use to watch movies while camping. Let me say that I love it based on my preliminary testing. It's a well designed and executed product. However, apparently it's impossible to play DRM content via any sort of casting or screen mirroring technology. You either have to use the Netflix, Disney+, etc apps directly from the device (terrible input UX, especially for searching for content), or use your own DRM-free movies from a USB flash drive (which is what I plan to do).

I've also noticed that there seem to be a lot fewer NBA games available on YouTube TV than last year. I'm assuming some obscure licensing deal expired or changed, yet again to the detriment of customers.

I suspect things have gotten bad enough that the only way to have a reasonable UX is to set up some sort of pirated pipeline based on Plex, Emby, Jellyfin, etc.

I'd prefer to just pay these people rather than mess with any of that, but they're making it too difficult to justify.

2021-01-15T19:24:35.000Z
QEMU aarch64

This is a dump of everything I've learned about running 64-bit ARM on QEMU.

General Notes

  • I get the feeling that VGA isn't really supported on the -M virt machine. The closest I've been able to get is using ramfb, but I still can't start X. I think it might be best to use a more specific QEMU machine type, such as one of the raspi boards.

Alpine aarch64 best settings

qemu-system-aarch64 \
    -M virt \
    -cpu cortex-a57 \
    -smp 4 \
    -m 4096M \
    -device ramfb \
    -bios /usr/share/edk2-armvirt/aarch64/QEMU_EFI.fd \
    -device usb-ehci -device usb-mouse -device usb-kbd \
    -drive file=alpine.qcow2
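
If a graphical session is the goal, a variant worth trying (untested on my end, and assuming your QEMU build includes the virtio-gpu-pci and qemu-xhci devices) swaps ramfb for a virtio GPU and adds a USB tablet for saner pointer handling:

qemu-system-aarch64 \
    -M virt \
    -cpu cortex-a57 \
    -smp 4 \
    -m 4096M \
    -device virtio-gpu-pci \
    -device qemu-xhci -device usb-kbd -device usb-tablet \
    -bios /usr/share/edk2-armvirt/aarch64/QEMU_EFI.fd \
    -drive file=alpine.qcow2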


2020-09-29T18:50:08.000Z
A Request to a YouTube Video Downloads the Title 14 Times and Displays it Twice

I have a project idea that could involve doing some scraping of YouTube videos, so I started poking around the HTML output of curling YT links. These things are a sight to behold. If you curl the following well-known URL and store it in a file:

https://www.youtube.com/watch?v=dQw4w9WgXcQ

Just searching for the title yields 14 results, spread throughout random HTML and JS. But it's only actually displayed to the user once on the page, and probably again in the browser tab. It's not just the title either, there's a ton of duplicated data and bloat throughout the file. I'm guessing it compresses well. I checked a few other files and they all had between 10 and 18 copies of the title.
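
If you want to reproduce the count, something along these lines works (the search string is just the recognizable part of the title, and your number may differ as YouTube changes its markup):

curl -s 'https://www.youtube.com/watch?v=dQw4w9WgXcQ' > video.html
# -o prints each match on its own line, so this counts occurrences, not matching lines
grep -o 'Never Gonna Give You Up' video.html | wc -l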

I'm not sure what conclusions to draw from this. A lot of them are obviously intended to be machine-readable for things like OGP, but do you really need 14 identical copies?

EDIT:

Since this errant observation somehow made it to the Hacker News front page, and eventually got flagged, I have a few more thoughts:

  • Sorry the title ended up more clickbaity than intended. It's not making 14 extra HTTP requests just for the title. It originally started with "An HTTP request" but it was a few characters too long for HN, and I didn't spend much time rethinking it.

  • I agree the extra text isn't a problem (like I said, that'll compress well). I'm more concerned about the underlying complexity it signals. There is more obvious evidence of this complexity (it makes 70 network requests when you load the page, even if you pause the video immediately); this is just a novel one for me.

  • I appreciate the copies which are intended to interoperate with other systems like Twitter and OGP.

  • I actually appreciate the fact that a JSON blob of all the video metadata is embedded in the HTML. It'll make my scraping task much simpler.

2020-09-03T17:58:14.000Z
Clever Curl

My personal cURL cheat sheet

Downloading files

Resuming failed download

You need to make sure the filename is the same in order for curl to automatically determine the offset. The - after -C tells it to use the output file to determine where to continue from.

curl -O -C - https://example.com/dir/filename.bin

Output filename

-O uses the filename from the end of the path.

curl -O https://example.com/dir/filename.bin

This can be annoying if it has query parameters, because they will end up in the filename. If you add -J it lets curl use the filename provided by the server (via the Content-Disposition header) if one is available.

curl -OJ https://example.com/dir/filename.bin?param=value

Uploading files

Prefer --data-binary to --data/-d because it doesn't mess with newlines etc.

curl https://example.com/dir/filename.bin --data-binary @filename.bin

The -T option makes a PUT request. The advantage over --data-binary is that it doesn't load the entire file into memory on the client. You can use -X POST to force it to use POST.

curl https://example.com/dir/ -T filename.bin
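
For example, to keep -T's streaming upload behavior but send it as a POST:

curl -X POST https://example.com/dir/ -T filename.bin
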
2020-09-01T18:10:12.000Z
TIL About json.tool

Somehow I never heard about json.tool until just now. It's built into Python and provides a simple way to pretty-print JSON on the CLI:

echo '{"og": "hi there"}' | python -m json.tool
{
    "og": "hi there"
}

This is a nice alternative if you have Python installed and don't want to take the time to install jq.
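
It also accepts a filename directly, which saves the echo/pipe (data.json is just a placeholder here):

python -m json.tool data.json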

2020-05-07T00:00:00.000Z
In Search of a Production-Quality JavaScript Framework

At work we use node+Webpack+Vue+Vuetify. In any isolated in-my-text-editor programming moment, this is more often than not a fantastic, almost magical experience. It's so easy to find and compose different powerful components and libraries. I truly feel like a wizard. But more and more, it seems that it's the in-between moments that are starting to define my relationship with these tools.

Our GitHub repos are constantly barraged with Dependabot warnings and pull requests concerning security vulnerabilities deep in our dependency tree. One of our apps is stuck on node 10 because the Webpack/Vue build process is failing for some reason we haven't had time to diagnose. Running npm outdated on any of our projects never makes you feel good. And when the framework abstractions themselves start to leak, it can be downright disheartening[0].

We try to keep things updated, but our subconscious minds have learned that this is almost always a painful process, so we avoid it even when we know we shouldn't.

Here's my take. I think reactive UI is the big idea. Any reasonably well-designed framework with a virtual DOM or equivalent (ie Svelte, Flutter), which enables an immediate-mode mental model for building UIs, gets you 80% of the way there. But I get the feeling that React and Vue are competing for that last 20%, and the result is a lot of breaking changes with diminishing returns in value to developers.

Even my personal favorite vdom framework for side projects, Mithril, bumped to version 2 a while back which caused breaking changes for my code. I remember what the breaking change was, but I can't remember any of the features added. I consider Mithril 1 to be a fantastic, 90% UI solution, in a nice small package.

Don't get me wrong, I still think Webpack, React, Vue, and Mithril are great and getting better, and I appreciate all the hard work that is going into them. This is important work. That's not my point.

Here's what I'm looking for. Somewhere out there, someone has built a very boring UI framework. A framework designed for stability. A framework with 5-10 year long-term support for each breaking change. This framework was probably developed internally at a large company. This framework is probably rather opinionated. This framework probably has a low number of high quality, stable external dependencies.

I'm going to find this framework, and I'm going to admire it from a distance, because my team is all-in on Vue at this point and I don't think there's any going back.

Footnotes

[0] For any of you who may not yet have had the experiences that teach you when it's necessary to use Object.freeze on portions of your Vue state: if you do at some point, know that you're not alone, and feel free to reach out to me for emotional support.
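
For anyone wondering what that's about: Vue 2 skips frozen objects when it sets up reactivity, so freezing a big read-only blob keeps it from being wrapped in getters and setters. A minimal sketch, with a made-up loader function:

export default {
  data() {
    return {
      // Vue 2 won't walk a frozen object, so this large read-only
      // dataset never gets reactive getters/setters attached to it.
      hugeLookupTable: Object.freeze(loadHugeLookupTable()),
    };
  },
};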

2020-03-27T00:00:00.000Z
Facebook has an opportunity to help curb a global pandemic

Perhaps the most concerning thing about the SARS-CoV-2 virus is that you can be simultaneously asymptomatic and contagious, for days. In a perfect world, we could all just stay away from each other indefinitely until we have some sort of a solution. Real-world constraints won't allow that.

Since some amount of human interaction is unavoidable, we need to look for other tools to help control the spread. In a world of perfect data, as soon as a person tested positive for COVID-19, we would be able to immediately inform everyone they've been in contact with recently, and everyone those people have been in contact with, and so on until we've alerted everyone within the max potential contagious period who hasn't already been warned by a shorter path through the social graph.

For better or worse, we don't have access to this perfect data. However, Facebook might be our best approximation. They have 2.5 billion active monthly users, and their apps default to tracking user location for targeting ads.

What I'm wondering is why doesn't FB add a big blue "I have tested positive for COVID-19" button to everyone's account, and then use their social graph and location data to implement an algorithm like what I described above?

Maybe this would be too much computational overhead at FB's scale. I'm not familiar with their query infrastructure. But that leads me to my second point.

How this would look with an open, decentralized social network

Imagine a social network devoted to nothing but slowing the spread of COVID-19 cases. All it would need to do for a given user is

  1. Track their location
  2. Allow them to connect with friends in a bi-directional manner the same way FB does
  3. Have the one single button for them to report they've tested positive
  4. Automatically warn all their 1st-degree connections (possibly including a dump of location data during the max contagious period).
  5. Include algorithms for processing alerts received from your friends and determining the likelihood you've been exposed, and passing that information on to your other friends.

This social network would run on a simple, open protocol. Ideally it would be federated with multiple providers which offer easy signup. It could even be implemented as a thin wrapper on top of email, though I think even that adds more complexity than is necessary.
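
To make that concrete, the alert one instance sends to a friend's instance could be as small as something like this (every field name here is invented, purely for illustration):

{
  "type": "exposure-alert",
  "degrees_removed": 1,
  "reported_positive_at": "2020-03-25T08:00:00Z",
  "contagious_window": {
    "start": "2020-03-11T00:00:00Z",
    "end": "2020-03-25T08:00:00Z"
  },
  "locations": [
    { "lat": 40.76, "lon": -111.89, "seen_at": "2020-03-20T17:30:00Z" }
  ]
}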

In terms of scale, I personally have ~1600 Facebook friends. My guess, from years of observing how many friends my friends have, is that most people have <1000, and relatively few have >2000. So in most cases you'd be looking at a given instance sending maybe 2500 requests to alert all their friends' instances that they've been infected. Receiving shouldn't be a problem, since it's unlikely all your friends get sick at the same moment.

Of course I'm probably missing something obvious, but maybe not. All this has me wondering if there might be a legitimate opportunity for a new social network to arise. We know Facebook-the-app is awful. It's tainted by the incentives from which it arose. I believe most of us could design a superior user-centric UX for the core functionality inside a month. However, due to network effects, Facebook-the-network is basically indomitable.

But all eyes are on COVID-19. People are cooped up in their houses wishing they could take some sort of action to make things better. Maybe they could.

2020-01-28T00:00:00.000Z
Get yourself off Google Analytics, in 5 minutes, for free, without self hosting

Google Analytics ("GA") is free and easy to use. The reason it's free is that Google is using you to get to your users. Every time someone visits your site without a blocker, Google fingerprints their browser and tracks them across sites they visit. This information is used to send that person targeted ads and manipulate them into buying crap they don't need.

So how do we get the convenience and benefits of analytics that GA provides, without selling out our users? Fortunately, there are a lot of small projects and companies that are popping up to address this.

The Criteria

When I started looking for an alternative, these were my main criteria:

  1. Needed to be very easy to get started. With GA, you sign up for an account, then copy/paste a small script.

  2. Offered as a hosted service. I've tried self hosting in the past, but devops just doesn't interest me as a hobby. I'm fine doing it for a few things, but not every service I need.

  3. No vendor lock-in. Preferably this means open source, but exporting all my data easily to a common format would also work.

  4. Respect user privacy and agency.

The Contenders

Based on my quick research, here are the projects that caught my eye:

  1. GoAccess. Uses server-side logs. Open source. This would work for a lot of my projects, and eventually I'd like to use it, but I don't want to deal with the hassle of figuring that all out right now. See Appendix A.

  2. GoatCounter. Dead simple GA alternative. Source code is available for self-hosting. Free for personal use. This is what I went with.

  3. SimpleAnalytics. Nice GA alternative. Expensive (19USD/mo). Source is not available.

  4. Matomo. Old guard GA alternative. Open source. Lots of features. Hosted is expensive (19USD/mo starting tier). Looks complicated. PHP.

The Choice

In the end, GoatCounter looked the most promising, so I tried it first. I immediately fell in love. It took 5 minutes to set up my account (didn't even have to create a password), and copy the script into my site. The UI is very simple and intuitive. It shows me the information I care about, without all the noise. It's so refreshing. Been using it for a week with no issues. My plan is to remove GA from this site, after the next time I have a high-traffic post, so I can compare them under load.
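
For reference, the "copy the script" step is a single tag along these lines (the subdomain is whatever code you pick at signup; check their docs for the current snippet):

<script data-goatcounter="https://yourcode.goatcounter.com/count"
        async src="//gc.zgo.at/count.js"></script>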

If you use GoatCounter, please consider signing up for one of the paid options, or at least sending a few $ their way. The internet needs more services like this. I'm not affiliated with GoatCounter in any way; just a happy customer.

Caveats

None of the GA alternatives I've seen have all the features GA offers. Personally, I realized that I only ever look at the basics anyway. I want to know how many people are visiting my site, where they are coming from, which pages they are visiting, and some basic device information.

You can go down a deep rabbit hole of analyzing how users navigate your site, where they click, where they drop off, etc, etc. But in my opinion, many of these paths lead to you manipulating your users into using your site the way you want them to. If you have good content, people will read your site. If you have a good product that solves a real problem, people will buy it. Sure, you should follow good design principles to make your site usable, but no amount of design can fix a site that doesn't offer any value.

Addendum

It's funny how sometimes we use things in subtly different ways because of a large jump in performance. Here's what I mean. I've never once tried checking my GA numbers from my phone browser. I never really thought about it; I just didn't do it. But with GoatCounter I noticed that I had started using my phone to check my daily pageviews. After a bit of thinking, I realized that I simply have 0 faith that the GA site would provide a good experience in a mobile browser, based on its desktop performance and the performance of other Google products. I imagine it would be slow and painful. My subconscious had ruled it out before I ever even considered it. GoatCounter gives the exact opposite impression. It screams simplicity and performance, just begging to be used on an entry-level smartphone.

Appendix A - Server logs vs SaaS analytics

When choosing server- vs client-side analytics, here are the things I think about:

  1. Not all static-site services (GitHub Pages, Netlify, etc) offer access to the server logs, and if they do, it sometimes costs money. Client-side gets around this.

  2. Server-side is nice because you don't need JavaScript.

  3. Client-side is nice because you can get more information, such as screen size.

  4. If you're using a CDN (like CloudFlare), using server-side will dramatically throw off your numbers, because not all requests are hitting your servers.

2020-01-20T00:00:00.000Z
This site is now browsable with netcat/plain TCP

This is getting out of hand. First, I wrote an unrelated post, and Hacker News got a little upset with me about my blog requiring JavaScript. I ultimately agreed with them. To redeem myself, I updated my site to not just serve static HTML, but to be entirely browsable with nothing but cURL. Full background and details in this post.

However, I kept going down the rabbit hole, wondering how simple serving blog content can realistically be. Here's how far I've gotten. If you have netcat installed, you can browse my site by running this command:

nc apitman.com 2052 <<< /txt/feed

There are further instructions at the top of that "page" describing how to navigate.

It uses an extremely simple protocol. You can use any TCP client. Just open a TCP connection to apitman.com on port 2052, write a path in plaintext, and it will return the contents if found. I call it the newb protocol.

Note that newb connections are not encrypted. If you want private browsing, you'll need to use HTTPS, either with cURL or a browser.

If you don't have netcat, it's simple to write a newb client in your favorite language. Here's one in node:

#!/usr/bin/env node

const net = require('net');

const client = new net.Socket();
client.connect(2052, 'apitman.com', function() {
	client.write('/txt/feed');
});

client.setEncoding('utf8');

client.on('data', (data) => {
  console.log(data);
});

And Python:

#!/usr/bin/env python3

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(('apitman.com', 2052))
    s.sendall(b'/txt/feed')

    chunks = []
    while True:
        chunk = s.recv(1024)
        if not chunk: break
        chunks.append(chunk.decode())
    print(''.join(chunks))

Those can both easily be adapted to take the path from the command line. And the host address too if anyone else decides to implement the newb protocol ;) Speaking of which, the server code is here.
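
For example, here's the Python client adapted to take the path, and optionally the host, from the command line:

#!/usr/bin/env python3

import socket
import sys

# Default to the feed on apitman.com if no arguments are given.
path = sys.argv[1] if len(sys.argv) > 1 else '/txt/feed'
host = sys.argv[2] if len(sys.argv) > 2 else 'apitman.com'

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((host, 2052))
    s.sendall(path.encode())

    chunks = []
    while True:
        chunk = s.recv(1024)
        if not chunk:
            break
        chunks.append(chunk.decode())
    print(''.join(chunks))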

2020-01-15T00:00:00.000Z
I just learned about HTML redirects

While trying to find a way to redirect NoScript users to the text version of my site, I discovered HTML redirects. How did I not know about these before?! Basically, it's a way to tell the browser to navigate to a different URL, without HTTP codes or JavaScript. To use it on my site, I just put the following in the head of my index.html:

<noscript>
  <meta http-equiv="refresh" content="0; URL='https://apitman.com/txt/feed'" />
</noscript>
2020-01-14T00:00:00.000Z
This site is now browsable with cURL

Update: The insanity has gone even deeper and this site is now browsable using plain TCP.

TL;DR - This site is now browsable from the command line using cURL. Give it a try:

curl https://apitman.com/txt/17

Yesterday, I wrote a quick rant essentially whining about how I couldn't figure out how to get my Roku smart TV to play a video I was hosting. I ended the post with the following:

When you build hardware/software, please make it support the primitive, simple case ... HTTP is the lingua franca of the internet. When you build stuff, please make it work with simple URLs.

There was some good discussion about the woes of smart TVs, and some great suggestions for alternative solutions, but a large number of comments were focused on the fact that my blog is rendered in JavaScript. Here are a few of my favorite excerpts:


Please make your stuff work with reader view.


Please make your stuff work without JavaScript. This is a simple text-only web page (in courier new ffs) and it refuses to load with JavaScript disabled. There is no valid reason for this.


Here is my plea: when you build hardware/software, please make it support the primitive, simple case.

Says a webpage that is just a couple empty divs w/o JS, and, with JS, is 4 hyperlinks, a few paragraphs of text, and absolutley nothing (aside from google-analytics) that ever needed any JS in the first place, let alone 5 or 6 files' worth of it. But, I think that conflict really speaks to the funamental issue, here: Thinking about the primitive, simple case is often, from the creator's perspective, more work than it's worth.


I mean, if you're going to wax lyrical about how what your making should support that basics, you should at least first make sure you're doing the same thing.


I mean, it loads fast for me.


I consider that stage of fuckwittedness either absolutely deliberate (which it appears to be in this case based on the authors defence of their practices), or utter incompetence.


And when it does load, it's white Courier on black, as if specifically designed for poor usability and accessibility.


Quote "HTTP is the lingua franca of the internet. When you build stuff, please make it work with simple URLs." Says the one who can't have a simple HTTP only site and I had to enable JS on NoScript in order to read his rant.


8 hours in and Spitman's site is still a black page, even with JS enabled.


Instead, you push that compute off on your readers; actively harming the environment in the process. Yes, in this day and age I feel it's quite justifiable to point that out. You could render it once and be done with it, but instead you chose to have it be rendered (inefficiently) tens or hundreds of thousands of times, consuming orders of magnitude more energy and generating heat, all to "avoid a dependency". This should be, in this day and age, morally reprehensible.


At the end of the day, my site did not support the simple case, making me at best a hypocrite, at worst a planet destroying megalomaniac.

Well, that ends now.

The thing is, I totally agree with these folks. It makes me sad how complicated these insane virtual machines we call web browsers have become. In part because of this complexity, browser competition is dying.

To quote myself from a comment yesterday:

In my perfect world, the JS-app functionality of browsers would be broken off into a separate "app player" application that users would download, and browsers would be stripped down to basic HTML and a subset of CSS (essentially fonts, colors, and flexbox). In that world I would definitely have an HTML-only version of my blog.

But the more I thought about it, I don't think even HTML/CSS is truly supporting the simple, primitive case. At the end of the day, most of my content is text. And text requires only Unicode (and often just ASCII). So, in that spirit, as of today my content is totally accessible using nothing but cURL, or any other HTTP client.

This post can be accessed here:

curl https://apitman.com/txt/17

To see a feed of my posts, go here:

curl https://apitman.com/txt/feed

There's a navigation section at the top of each page with links and cURL commands for the other sections. I've found it pretty easy to navigate, whether by copy/pasting commands in the CLI or by quickly modifying the URLs in the browser and using the back/forward buttons.

I know this whole thing might come off as sarcastic, or passive aggressive, and I totally admit that was part of my motivation at first. But the more I worked on this, the more I liked it. It made me think a lot more about accessibility, and what would happen to a lot of content on the web if we didn't have our JS VMs for some reason.

This also gave me a greater appreciation for Markdown, and other human-readable text formats. When I first re-wrote my site a few months ago, I struggled to choose between Markdown and plain HTML. Markdown seemed like a dependency to me. But now I realized HTML has a much bigger dependency: a web browser. In the simple case, you don't need a Markdown renderer to read this post.

This is still an early experiment. I've thought about how I could implement things like forward/back. I'd love to hear if anyone else has ideas for how to improve the experience.

Specifically, I have one open question: while working on this, I realized just how ugly inline links (especially long ones) can be in unrendered Markdown. For this post, I put them at the bottom in reference section style. But I'm wondering how this is for accessibility. What do visually impaired folks prefer?

2020-01-13T00:00:00.000Z
Please make your products work with URLs

I want to tell you about something I was unable to accomplish, after more than 30 minutes of concerted effort.

I have a video file hosted on a web server. The file is H.264 main-profile encoded at a reasonable bitrate (<5Mbps), uses AAC audio, and is packaged in an MP4 container. The web server supports HTTP range requests. In other words, the video is basically in the least common denominator format for compatibility. It streams great in all major web browsers, VLC, and everything in between.

In my living room, I have a Roku "smart" TV. It has tons of apps, full internet connectivity, and is more than capable of both connecting to and playing the video file described above. But I failed to get this to happen, after much googling and trying multiple apps (both on the Roku TV and my Android phone).

The way this type of thing is usually accomplished in 2020 is to open the video on your phone, then tap a "cast" icon and tell it to send the video to your TV. What happens behind the scenes is the phone uses some protocol (Chromecast being the most common I'm aware of) to send the URL to the TV, and the TV then plays it directly, while still letting you play/pause, seek, change volume, etc from the phone. When this works, it's like magic. The YouTube app works particularly well. However, there doesn't seem to be any widely implemented standard for playing plain URLs, only walled gardens like the YouTube app.

This whole thing was made much more frustrating by the fact that I knew the TV had all the requisite capabilities to do what I was attempting. The YouTube app is proof of that. There just wasn't any obvious way to find the correct app combination.

Here's the way this should work.

The Roku app for Android allows you to use your phone as a keyboard for the TV, rather than the awful physical remote UX for input. This is a great feature which I appreciate.

I should be able to copy a URL from my phone (possibly obtained from scanning a QR code), paste it into the Roku Android app, and the Roku should attempt to play the file at the URL. This is clunky, awkward, and not particularly easy. But it is simple, obvious, and intuitive.

Here is my plea: when you build hardware/software, please make it support the primitive, simple case. By all means, implement the slick Chromecast-style flows. It's great when it works. But there needs to be a fallback for when it doesn't work, or when the user wants to try something slightly different. HTTP is the lingua franca of the internet. When you build stuff, please make it work with simple URLs.

2020-01-06T00:00:00.000Z
Untitled

Services are just dynamic libraries where HTTP is the ABI

2019-10-02T00:00:00.000Z
Compiling libfuse examples

I recently started looking at libfuse. It wasn't immediately obvious to me how to compile the examples, such as this one.

A naive gcc -lib hello.c yielded these errors:

In file included from /usr/include/fuse/fuse.h:26,
                 from /usr/include/fuse.h:9,
                 from hello.c:23:
/usr/include/fuse/fuse_common.h:33:2: error: #error Please add -D_FILE_OFFSET_BITS=64 to your compile flags!
   33 | #error Please add -D_FILE_OFFSET_BITS=64 to your compile flags!
      |  ^~~~~
hello.c:55:11: warning: 'struct fuse_config' declared inside parameter list will not be visible outside of this definition or declaration
   55 |    struct fuse_config *cfg)
      |           ^~~~~~~~~~~
hello.c: In function 'hello_init':
hello.c:58:5: error: dereferencing pointer to incomplete type 'struct fuse_config'
   58 |  cfg->kernel_cache = 1;
      |     ^~
hello.c: At top level:
hello.c:84:10: warning: 'enum fuse_readdir_flags' declared inside parameter list will not be visible outside of this definition or declaration
   84 |     enum fuse_readdir_flags flags)
      |          ^~~~~~~~~~~~~~~~~~
hello.c:84:29: error: parameter 6 ('flags') has incomplete type
   84 |     enum fuse_readdir_flags flags)
      |     ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
hello.c: In function 'hello_readdir':
hello.c:93:2: error: too many arguments to function 'filler'
   93 |  filler(buf, ".", NULL, 0, 0);
      |  ^~~~~~
hello.c:94:2: error: too many arguments to function 'filler'
   94 |  filler(buf, "..", NULL, 0, 0);
      |  ^~~~~~
hello.c:95:2: error: too many arguments to function 'filler'
   95 |  filler(buf, options.filename, NULL, 0, 0);
      |  ^~~~~~
hello.c: At top level:
hello.c:131:20: warning: initialization of 'void * (*)(struct fuse_conn_info *)' from incompatible pointer type 'void * (*)(struct fuse_conn_info *, struct fuse_config *)' [-Wincompatible-pointer-types]
  131 |  .init           = hello_init,
      |                    ^~~~~~~~~~
hello.c:131:20: note: (near initialization for 'hello_oper.init')
hello.c:132:13: warning: initialization of 'int (*)(const char *, struct stat *)' from incompatible pointer type 'int (*)(const char *, struct stat *, struct fuse_file_info *)' [-Wincompatible-pointer-types]
  132 |  .getattr = hello_getattr,
      |             ^~~~~~~~~~~~~
hello.c:132:13: note: (near initialization for 'hello_oper.getattr')

I should have looked closer, since this line is in the file header comment:

gcc -Wall hello.c `pkg-config fuse3 --cflags --libs` -o hello

As you can see, you need pkg-config and libfuse installed. I had both of those, but not the correct command.

2019-09-27T00:00:00.000Z
Before You Reach For That Dependency

Introduction

We love talking about "fatigue" in the JavaScript community. You know what I've been getting a bit fatigued by lately? This:

I've been thinking a lot about dependencies. This is something of a brain dump.

I'll start by giving my working definition of what a dependency is, then I'll go through a list of common dependencies. Finally, I'll present a simple model I've started using for classifying dependencies.

What is a dependency?

For the purposes of this article, I will define a dependency as any component of an experience which is outside your direct control as the developer of that experience.

Note that this definition doesn't say anything about software. Software experiences (ie apps, websites, games, etc) are only one type. Driving your car down the road, attending a concert, or playing football are all experiences, and all have dependencies. Ever tried to play football without a ball? Whoever invented the rules of football was free to design the experience however they wanted, but ultimately the experience always depends on some sort of a ball being available at "runtime".

Note that some dependencies are optional. It's ideal to wear special shoes when playing football, but it can be done with normal shoes or even barefoot, with the experience being diminished to varying degrees.

For the remainder of this article, I'm going to focus on dependencies of software experiences.

The more time users spend with a piece of software, the more they get used to a specific experience. In general, we want to avoid changing an experience unless we have a very good reason to do so (ie adding a feature that we are confident will significantly improve the experience).

The central thesis of this article is that dependencies create openings for the experiences we develop to change without our deliberately wanting them to, and so we should be thoughtful and careful about the dependencies we take on.

Examples of Dependencies

Here's a list of common dependencies.

Hardware

The hardware your software runs on is a dependency. In general, you as a developer have little to no control over what hardware your users run your software on. A user may be perfectly happy running your app on the latest flagship Android phone, but then they lose their phone and have to downgrade to a budget model for a few months, and suddenly your app is unusable for them.

Note that developers who make software for Apple products have a huge advantage here. The number of devices they need to test on is vastly smaller than developers for Windows, Linux, and especially Android.

Hardware is a particular challenge with web development, where the same software stack is used to develop for every imaginable type of hardware. Not only that, but this stack is running a dynamic, interpreted language with many layers of security and abstraction. One of the most exciting things about WebAssembly is the potential to normalize web app performance across a wider range of hardware[1].

In cases where you can control the hardware, it's amazing the levels of consistency and reliability you can achieve. Last year I developed some forearm pain in both arms from typing too much, so I made some Arduino-based foot pedals so I didn't have to strain so much to hit combo keys. I have a pair at home and work, that have both been working 24/7 for a year. No failures, no glitches, no reboots necessary.

Operating systems

If your app is too tightly coupled to a specific version of an operating system, when the user updates their computer your app might quit working. This is a much bigger problem on systems like Linux. I often have issues with apps not being able to find the right versions of dynamic libraries.

From what I've heard, Windows has an excellent backwards-compatibility history. Mobile OSes seem to be somewhere in the middle.

One example I've seen is where an OS adopts a specific design paradigm (such as Material Design for Android), and pressure builds to overhaul your app so its UI stays consistent with the rest of the OS.

Programming Language Compilers/Runtimes

The functionality, performance, and distribution of your app are deeply tied to your choice of programming language. Fortunately, these are some of the most stable dependencies around.

An obvious exception is new or quickly evolving languages. Rust syntax looks very different post-1.0 than it did in the beginning, and even today the async story is rapidly changing.

One situation I wouldn't want to be in is having written an app in the latest compiles-to-JavaScript language, then having the language go extinct 2 years later. This is less of a problem if you only need the codebase to live for 2 years, but I'm not sure how common that is (or at least should be).

Build Tools

Bundlers, transpilers, minifiers, uglifiers, etc.

They're called "dev dependencies" for a reason. Have you ever done a fresh 'npm init' followed by 'npm install webpack webpack-cli', then taken a peek in node_modules?

Speaking of npm, I think it belongs here as well. Yes, npm is a dependency. Especially if the experience you're providing is a library that can only be installed through it. It's perfectly possible to write a node service or browser application without having a package.json at all. As a matter of fact, that's the case with this website and my personal website, both of which are single-page apps with few dependencies.

Note that Python/pip is a similar story to npm.

The Internet

The internet can be a huge dependency, and if interfaced with poorly, a huge liability. The quality of different connections (and even a single connection over time) varies wildly, and is affected by outages, congestion, solar flares, Georgian women with shovels, and whatever the cloud decided to have for breakfast on a given morning.

If your app relies on an internet connection at runtime, almost by definition you are shipping a software experience which is constantly changing. Just because people have become accustomed to dealing with slow internet connections doesn't mean it's ok for us to abuse the internet as a dependency. There are many techniques for improving the user experience, the most basic of which is communicating what exactly is going on.

Time-to-first-byte is often touted as one of the most important attributes of web software. Maybe that's true. I'm inclined to question it, and I think it depends on whether you're talking about a content website, or a web app. If it's an app, and I have a choice between waiting 5 seconds for it to download all the code and enough data for page changes to be instantaneous, vs an instant first page followed by variable page loads later, content jumping around as things stream in, etc, I'll take the 5 seconds every time. Especially if it gives me a loading bar.

It's all about expectations. For years, gamers have been waiting hours on huge downloads before they can even run the game, because once the download is done, the performance is great.

Note that fast first load and instant page navigations are not mutually exclusive.

Web Browsers

These days, browsers are essentially in the same category as operating systems. If Chrome, Firefox, or Safari decided one day to make a major change, your app could instantly break for thousands of users. Fortunately, web browsers generally have an exceptionally good backwards compatibility story. Our current browsers will gladly run JS from 10 years ago, and I expect the JS I'm writing today to still work 10 years from now. That's impressive.

Web Links

Links are a central part of the web. However, they also make any given web experience incredibly brittle. Any web page you make is dependent on every link on that page. If you link to an external page, and that page disappears (which happens often), your experience is now broken.

You could always link out to the Internet Archive, but then you're centralizing all your link dependencies. I think the long-term solution to this problem could be something like IPFS, where websites pin versions of everything they link to. But that has its own problems, like if you link out to an insecure version of a web app. This would basically be the web's version of static linking.

Frameworks/Libraries

These ones are obvious. They're what I usually think of first when people talk about dependencies. If you're using a large framework that does a lot of heavy lifting for your app, you're at the mercy of that framework (and its likely many sub-dependencies) for your experience to remain consistent. The less you understand what that framework is doing under the hood, the more vulnerable you are.

That doesn't mean frameworks are bad. A new developer with an exciting idea might be able to crank out a prototype using a framework where they would otherwise get bogged down with platform details.

However, in general I advocate learning the platform over time, not necessarily to avoid using frameworks, but to reduce the vulnerability that comes from dependence on them. If your framework just can't do what you need it to (or as performantly as you need it to), ideally you should be able to throw it out and implement a bespoke replacement.

Just a few days ago, the inventor of NPM made a change in minipass, which broke node-pre-gyp (I still don't know how exactly it depends on minipass, since it's not a direct dependency...), which broke bcrypt, which broke our Docker build, and a lot of other people's stuff. Kudos to them for working to fix it right away.

Note that although pinning versions of the libraries we're using would have avoided this particular problem for us, and is a good idea in general, it is not a silver bullet. package-lock.json can save you from your app breaking overnight, but in general there are going to be security updates and other issues.

A few months ago we had a dependency that hadn't been maintained in years. It had gotten so old that it required an ancient version of node to work properly. Eventually it started preventing us from keeping our other dependencies up to date, because they depended on modern JavaScript features. Languages and runtimes are relatively stable, but they DO change.

Datasets

If your app/experience relies on a specific dataset, that's a dependency. One example would be an interactive data visualization. If you collected and control the data yourself, this likely isn't a problem. However, if the data comes from a 3rd-party source that is constantly changing, you're dependent on that source, or you risk the data becoming stale. Even if the data doesn't change, you may be dependent on a 3rd party not changing their usage policies.

APIs

Closely related to datasets are APIs, which are often used to access datasets such as Twitter's. Over time, companies have consistently locked their APIs down more and more to prevent 3rd party developers from making alternative interfaces. This makes sense from a business point of view; you can't show ads on an app you don't control. If you're going to rely on an API to develop your app, make sure you understand the business incentives of all parties involved, and what that likely means for the long-term viability of your app.

A Mental Model for Categorizing Dependencies

There are many different metrics we can use to gauge the risk of adding a certain dependency, or to compare different dependencies with each other. One criterion is how much control you have over the dependency itself. A small, in-house library provided by the team down the hall is much lower risk than an off-the-shelf closed-source framework. In general, the less work a dependency is doing for you, and the more generic it is (ie easy to replace with an alternative), the safer it is to use it.

As I was thinking about this, I started breaking dependencies down into several distinct categories. This is how I think of them:

  1. Platform Dependencies
  2. Data Dependencies
  3. Logic Dependencies

Platform Dependencies

Platform dependencies are the most fundamental type. These include hardware, operating systems, web browsers, programming language compilers/runtimes, and APIs. Pretty much every software project is going to have platform dependencies. APIs are unique here because they are much more risky than the others. Rarely do popular APIs provide their source code, and even if they did, usually it's the data behind them that's actually valuable. You're giving a 3rd party complete control over the functionality of your app, with a high likelihood of it changing. However, sometimes there is no choice. If you want to develop a Facebook app, you have to use their API.

Platform dependencies are the level where you're almost certainly wasting your time trying to build it yourself (which doesn't mean that's never a good choice).

Data Dependencies

Data dependencies include things like public datasets, well-known lookup tables, and sometimes even algorithms. One nice thing about these is that they are often strongly related to something in the physical world, which lends them a certain gravity that helps prevent them changing over time. For example, the CORDIC algorithm/lookup tables have been around essentially unchanged for many years, and will be useful in this form for many more, because they are closely tied to a) math and b) fundamental hardware architecture that is almost universally used in our current computing systems.

When re-writing my personal website recently, I tried to avoid dependencies as much as possible. The site does have 2 though: a markdown-to-html converter and a syntax highlighter. The syntax highlighter is a great example of a data dependency. The core logic is unlikely to change from language to language, and might be something I would consider writing myself. However, it's not worth my time duplicating the effort already put into creating grammars for all the supported languages.

Given their ubiquity and stable nature, I don't worry too much about using data dependencies.

Note that public datasets that are trapped behind an API aren't pure data dependencies, unless you can download the entire dataset.

Logic Dependencies

Logic dependencies are the least desirable (and most avoidable) type. Logic here refers to basic programming logic, ie if/then, loops, etc. Frameworks and most libraries (except thin wrappers around datasets) are in this category. These dependencies include basically any unit of functionality which you could write yourself and avoid the dependency. However, there's a tradeoff here. The more complicated the job being done by the dependency, the more you should consider whether it's worth doing it yourself.

My rule of thumb: if I'm not familiar with the inner workings of a dependency, I spend a bit of time trying to implement it myself. Maybe an hour or two. Maybe a day or two. Sometimes I give up and decide to use the dependency. Sometimes I realize I only need a tiny piece of the functionality and implementing it myself is the right answer. Either way, I learn a lot and can make a faster decision the next time. Plus if I do take on a dependency, I likely have a much better idea what it's doing for me after going through this process.

Client-side routing is one thing I recently realized is simpler than I thought (for the features I need, at least) and don't always need a library for.
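
To give a sense of scale, here's roughly what I mean, as a hash-based sketch (it assumes an element with id "app" to render into, and the route handlers are just placeholders):

// Each route is a function that fills in the app container.
const routes = {
  '/': (el) => { el.textContent = 'home'; },
  '/about': (el) => { el.textContent = 'about'; },
};

function router() {
  // Strip the leading '#'; fall back to the home route.
  const path = window.location.hash.slice(1) || '/';
  const render = routes[path] || ((el) => { el.textContent = 'not found'; });
  render(document.getElementById('app'));
}

// Re-render on navigation and on initial page load.
window.addEventListener('hashchange', router);
window.addEventListener('load', router);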

Something that I doubt I'd ever try to write from scratch is a WYSIWYG HTML editor. It's a very complex task, and there are already high quality, pluggable solutions out there.

Conclusion

Dependencies are a necessary part of developing useful software experiences. However, there is always a cost associated with taking on a dependency. Generally, I try to avoid dependencies, and when I do need them, I try to only use them in places where they could be swapped out for a similar option. The syntax highlighter mentioned earlier is a great example of this. Doing this is easier if you make a wrapper that only exposes the features you need.
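
As an illustration of that last point, such a wrapper can be as thin as this (the library name and its render call are stand-ins, not a real API):

// highlighter.js -- the only module that knows which library is in use.
import thirdPartyHighlighter from 'some-highlight-lib';

// The rest of the site only ever calls highlight(code, language),
// so swapping libraries later means changing this one function.
export function highlight(code, language) {
  return thirdPartyHighlighter.render(code, language);
}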

I hope I've given you one or two new ideas to consider the next time you're faced with the choice of whether to take on a dependency.

[1] Let's be honest, web developers will probably find a way to waste it.

2019-09-24T00:00:00.000Z
The 64 Milliseconds Manifesto

In an interactive software application, any user action SHOULD result in a noticeable change within 16ms, actionable information within 32ms, and at least one full screen of content within 64ms.

2019-09-23T00:00:00.000Z
Debugging iOS Safari

It took me a sadly long time to realize my new website wasn't working at all on iOS Safari (likely not desktop Safari either). That's one of the pitfalls of making it a single page app. I still think the tradeoffs are worth it though.

Anyway, I needed a way to debug it, since Safari Web Inspector relies on having a Mac, and I'm on Linux. Turns out this isn't a new problem, and the folks at Google have an excellent debugging tool that was easy for me to set up:

ios-webkit-debug-proxy.

Once I got the console working, it became pretty obvious that the global config I had defined wasn't being seen inside my ES modules. Neither Firefox nor Chrome has this behavior. Not sure what the standard says, but I was able to fix it by passing the config into my components, which is a better practice anyway.
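
Concretely, the change amounted to something like this (the names and URL are made up; the pattern is what matters):

// Before: modules assumed a global set by an earlier, non-module script tag:
//   const apiUrl = window.CONFIG.apiUrl; // undefined inside my ES modules on iOS Safari

// After: the entry point owns the config and hands it down explicitly.
import { createApp } from './app.js';

const config = { apiUrl: 'https://example.com/api' };
createApp(document.getElementById('root'), config);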

2018-06-27T00:00:00.000Z
Rust, React, and WebAssembly

Note: This post originally appeared on Fullstack React here.

Introduction and Motivation

In this post, we're going to show how to compile some Rust code to WebAssembly, and integrate it into a React app.

Why would we want to do this?

It has become very popular in recent years for JavaScript to be used as a compilation target. In other words, developers are writing code in other languages, and compiling that code to JavaScript. The JavaScript can then be run in a standard web browser. CoffeeScript and TypeScript are both examples of this. Unfortunately, JavaScript was not designed to be used like this, which presents some difficult challenges. Some smart people recognized this trend, and these challenges, and decided to make WebAssembly (aka WASM). WebAssembly is a binary format designed from the ground up to be a compile target for the web. This makes it much easier to develop compilers than it is for JavaScript, and also opens up lots of potential performance gains. As an example, with WASM it's no longer necessary for the browser to parse the code, because it's already in a binary format.

There are lots of ways to get started with WebAssembly, and many examples and tutorials already out there. This post is specifically targeted at React developers who have heard of Rust and/or WebAssembly, and want to experiment with including them in a React app.

I will cover only the basics, and try to keep the tooling and complexity to a minimum.

Source

Complete source code for the final running example is available on GitHub

Prerequisites

You'll first need to have Rust and node installed. They both have excellent installation documentation:

Create the React App

We'll start with a barebones React app. First, create the directory react_rust_wasm, and cd into it.

Create the following directories:

src
build
dist

Then, initialize the npm package with default options:

npm init -y

Next, install React, Babel, and Webpack:

npm install --save react react-dom
npm install --save-dev babel-core babel-loader babel-preset-env babel-preset-react webpack webpack-cli

Then, create the following source files:

dist/index.html:

<!doctype html>
<html>
  <head>
    <meta content="text/html;charset=utf-8" http-equiv="Content-Type"/>
    <title>React, Rust, and WebAssembly Tutorial</title>
  </head>
  <body>
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>

src/index.js:

import React from 'react';
import ReactDOM from 'react-dom';

ReactDOM.render(
  <h1>Hi there</h1>,
  document.getElementById('root')
);

We will also need a .babelrc file:

{
  "presets": [
    "react",
    "env",
  ],
}

And a webpack.config.js file:

const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
        }
      }
    ]
  },
  mode: 'development'
};

You should now be able to test that the React app is working. Run:

npx webpack

This will generate dist/bundle.js. If you start a web server in the dist directory you should be able to successfully serve the example content.
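
Any static file server works for this; for example, with Python 3 installed:

cd dist && python3 -m http.server 8080

Then open http://localhost:8080 in your browser.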

At this point we have a pretty minimal working React app. Let's add a button so we have a little interaction. We'll use the button to activate a dummy function that represents some expensive computation, which we want to eventually replace with Rust/wasm for better performance. Replace src/index.js with the following:

import React from 'react';
import ReactDOM from 'react-dom';

function bigComputation() {
  alert("Big computation in JavaScript");
}

const App = () => {
  return (
    <div>
      <h1>Hi there</h1>
      <button onClick={bigComputation}>Run Computation</button>
    </div>
  );
};

ReactDOM.render(
  <App />,
  document.getElementById('root')
);

Now you should get an alert popup when you click the button, with a message indicating that the "computation" is happening in JavaScript.

Adding a Splash of Rusty WASM

Now things get interesting. In order to compile Rust to WebAssembly, we need to configure a few things.

WebAssembly Dependencies

First, we need to use Rust nightly. You can switch your Rust toolchain to nightly using the following command:

rustup default nightly

Next, we need to install the necessary tools for wasm:

rustup target add wasm32-unknown-unknown
cargo install wasm-bindgen-cli

Create the Rust project

In order to build the Rust code, we need to add a Cargo.toml file with the following content:

[package]
name = "react_rust_wasm"
version = "1.0.0"

[lib]
crate-type = ["cdylib"]

[dependencies]
wasm-bindgen = "0.2"

You can ignore the lib section for this tutorial. Note that we have wasm-bindgen in the dependencies section. This is the Rust library that provides all the magic that makes communicating between Rust and JavaScript possible and almost painless.

Now create the source file src/lib.rs to contain our Rust code:

#![feature(proc_macro, wasm_custom_section, wasm_import_module)]

extern crate wasm_bindgen;

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern {
    fn alert(s: &str);
}

#[wasm_bindgen]
pub fn big_computation() {
    alert("Big computation in Rust");
}

I'll break this down a bit for those who might be new to Rust.

#![feature(proc_macro, wasm_custom_section, wasm_import_module)]

The first line is telling the Rust compiler to enable some special features to allow the WebAssembly stuff to work. These features are only available in the nightly toolchain, which is why we enabled it above.

extern crate wasm_bindgen;

This is how you include code from external libraries (known as "crates") in Rust.

use wasm_bindgen::prelude::*;

Rust has an excellent module system to keep your code cleanly separated. This line tells the compiler that we want to be able to directly access everything in the wasm_bindgen::prelude module. Prelude modules are a convention in the Rust community. If you create a library for others to use, it's common to include a prelude module which will automatically import the most important pieces of your API, to save the user the trouble of individually importing everything.

#[wasm_bindgen]
extern {
    fn alert(s: &str);
}

The extern keyword declares a section of code which is defined outside our Rust source. In this case, the alert function is defined in JavaScript. The #[wasm_bindgen] attribute invokes a Rust macro which bridges that block so the JS function can be used from Rust. Macros in Rust are very powerful. They're similar to C/C++ macros in what they can accomplish, but much nicer to use in my experience. If you've never used C macros, you can think of a macro as a way for the compiler to transform your code or generate new code based on parameters provided at compile time. In this case, the wasm_bindgen macro takes care of generating all the plumbing between Rust and JavaScript, based on the function names we provide.

#[wasm_bindgen]
pub fn big_computation() {
    alert("Big computation in Rust");
}

This is a normal Rust function, except that once again we're using the wasm_bindgen macro to generate the plumbing. In this case, the big_computation function is being made available to be called from JavaScript. When called, this function calls the alert function, which as we saw above is defined in JavaScript. We've set this up to test the complete loop of calling from JS to Rust and back to JS.

Building

We're now ready to build everything. There are a couple of stages to this. We're going to implement these as simple npm scripts. Of course there are lots of fancier ways to do this.

The first stage is to compile the Rust code into wasm. Add the following to your package.json scripts section:

"build-wasm": "cargo build --target wasm32-unknown-unknown"

If you run npm run build-wasm, you should see that the file target/wasm32-unknown-unknown/debug/react_rust_wasm.wasm has been created.

Next we need to take the wasm file, and convert it into the final form that can be consumed by JavaScript, in addition to generating the proper JS files for wrapping everything. Add the following script to package.json:

"build-bindgen": "wasm-bindgen target/wasm32-unknown-unknown/debug/react_rust_wasm.wasm --out-dir build"

If you run npm run build-bindgen, you should see several files created in the build directory.

Note that wasm-bindgen even creates a react_rust_wasm.d.ts file for you in case you want to use TypeScript. Nice!

Ok, now all we need is a build script to do all the steps in order:

"build": "npm run build-wasm && npm run build-bindgen && npx webpack"

Your package.json scripts section should now look something like this:

"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build-wasm": "cargo build --target wasm32-unknown-unknown",
    "build-bindgen": "wasm-bindgen target/wasm32-unknown-unknown/debug/react_rust_wasm.wasm --out-dir build",
    "build": "npm run build-wasm && npm run build-bindgen && npx webpack"
  },

Running npm run build should work at this point. However, we still need to modify our JavaScript code to use our wasm module instead of the JS function.

Replace src/index.js with the following:

import React from 'react';
import ReactDOM from 'react-dom';

const wasm = import("../build/react_rust_wasm");

wasm.then(wasm => {

  const App = () => {
    return (
      <div>
        <h1>Hi there</h1>
        <button onClick={wasm.big_computation}>Run Computation</button>
      </div>
    );
  }

  ReactDOM.render(
    <App />,
    document.getElementById('root')
  );
});

There are a couple of important changes. First, as of this writing, you need to use the import function, rather than the normal ES6 import syntax. This is because wasm modules have to be loaded asynchronously, which the regular synchronous import syntax doesn't yet support. In order to use this function, we need to enable a babel plugin. Install it with the following:

npm install --save-dev babel-plugin-syntax-dynamic-import

And add it to your .babelrc:

{
  "presets": [
    "react",
    "env"
  ],
  "plugins": ["syntax-dynamic-import"]
}

The import function returns a promise. That's why we need to call wasm.then in order to kick things off.

You should now be able to successfully run npm run build. Reload dist/index.html from a web server and you'll now see a message indicating it's running from Rust. And just like that, we're done!

Where to go from here

There are a lot of exciting things happening in the world of Rust+WebAssembly. This tutorial was aimed at React developers who just want to get their feet wet. Here are a few other resources you can check out if you want to go deeper.

  • Check out this great post to get an idea of the goals and vision for Rust/WASM.

  • rustwasm/team. This seems to be the central repository for keeping up with the current state of Rust and WebAssembly. It's a fantastic resource.

  • wasm-bindgen. I highly recommend reading through its documentation and examples. A good chunk of this tutorial is copied almost exactly from there. There are many more advanced features that can be used, such as using other JavaScript APIs, defining structs in Rust and using them in JS, passing those structs between Rust and JS, and more.

  • stdweb is a bridging library that has some overlap with wasm-bindgen. stdweb has some nice features and macros for letting you write JavaScript inline in your Rust code, rather than just a simple bridge. wasm-bindgen seems to be more focused on bridging, and is designed to be used with languages other than just Rust in the future.

  • Yew is a Rust framework for writing client-side apps. It's heavily inspired by React, but it lets you write your app 100% in Rust.

  • The excellent New Rustacean podcast recently did an episode on Rust/WASM. I highly recommend giving it a listen.

2018-06-26T00:00:00.000Z Fullscreen Open in Tab
Make You a Static Site Generator

NOTE 2019-09-01: This is an old post that I never really finished and published. You may notice that it ends rather abruptly. Ironically, I remembered it while re-writing my website as a single page app using plain JS. But although I'm no longer using the method or tools described here for my site, I think it's an interesting snapshot of where I was at in my thinking last year, and I don't want to throw away the writing.

Introduction

There are lots of great static site generators out there. If you just want to get a blog or simple site up and running with minimal fuss, you can't go wrong with something like Hugo or Gatsby. However, SSGs can be very simple pieces of software, and I highly recommend writing your own to get more customization over your site, as an exercise, or even just for fun.

I recently decided to rewrite this website from scratch. I had previously been using Hugo, which is an excellent static site generator, and worked great for my needs. However, I wanted complete control, down to the tiniest detail. This is partially for the purposes of learning, but also because I'd like to add some custom features to my site eventually, like maybe doing some extreme load time optimizations, etc.

Anyway, I knew from the get-go there was a good chance I'd end up writing some sort of a SSG.

Spoilers: The final result of this exercise, [Anders'|Another] Static Site Generator (assg), can be found on GitHub.

Step One: Raw HTML

I decided to start completely barebones, with nothing but raw HTML, and add the minimal amount of functionality to get a working landing page and blog, with an eye towards eventually adding other pages such as links to projects, resume, and so forth.

I first wrote the landing page with nothing but a paragraph and a nav section with 2 links: one to the landing page itself ("Home"), and one to a not-yet-implemented "Blog" page. It was really refreshing to write HTML directly, without going through layers of JS frameworks. It had been a while, and it was pretty nostalgic.

Reusing HTML

It didn't take long before I needed more functionality. When I went to implement the Blog page, I obviously wanted to reuse the nav section at the top of my landing page. I knew templates were probably a good direction to go for this. But what templating system to use? I spent some time comparing different options. My primary constraint was that I wanted to write my SSG in Rust, both to continue learning the language, and because I think it's awesome. That ruled out many of the template systems, which are written for JavaScript and Ruby.

After reading up on templating systems a bit, I decided to go with Mustache, because it's simple, old, and supported across a wide variety of languages. Remember, I'm not looking for anything fancy, just basic HTML reuse/imports.

And, true to form, there's already a Rust library for parsing Mustache templates: rust-mustache. Using the library is pretty dead simple. You just give it a string of the template (read from a file in this case), and it renders it to HTML which you can then write to an output file. It even handles partials, which are a way for a template to include another template. This is actually exactly the functionality I needed. I want to render index.mustache and blog/index.mustache, and have them both include header.mustache which has the nav section. This worked great and required very little work on my part.
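
In case it helps to see the shape of that flow, here's a rough sketch. I'm writing the rust-mustache calls from memory, so treat the exact names (compile_str, MapBuilder, render_data) as assumptions to verify against the crate's docs, and the file paths are just placeholders:

extern crate mustache;

use std::fs;

fn main() {
    // Read the template from disk as a plain string.
    let template_str = fs::read_to_string("templates/index.mustache")
        .expect("failed to read template");

    // Compile it into a reusable Template.
    let template = mustache::compile_str(&template_str)
        .expect("failed to compile template");

    // Build some data to substitute into the template.
    let data = mustache::MapBuilder::new()
        .insert_str("title", "My Site")
        .build();

    // Render into a buffer of HTML (ignoring the return value here for
    // brevity, since its type differs between versions of the crate).
    let mut html = Vec::new();
    let _ = template.render_data(&mut html, &data);

    // Write the rendered HTML to the output file.
    fs::write("public/index.html", html)
        .expect("failed to write output");
}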

index.mustache now looks basically like this:

{{> header}}
<main>
  <p>
    ...boring words...
  </p>
</main>
{{> footer}}

The {{> header}} part is the syntax for including a partial named header.mustache. It can also handle relative paths like {{> ../partials/header}}.

Generating a List of Blog Posts

Ok, so at this point we are able to reuse little snippets of HTML, but all of the pages still need to be written manually. What I wanted was to be able to drop a bunch of Markdown files into a directory, and have the SSG automatically generate a page with a list of links to each of the posts.

The first stage of this is really simple. We just need to read the list of files in the indicated directory, and pass that list to the blog/index.mustache template. Mustache has the ability to repeat sections of HTML based on an array of input. In this case the array is the list of blog posts.
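
Reading the directory is just a few lines of std::fs; here's a minimal sketch (the content/blog path is only an example, and the error handling is deliberately crude):

use std::fs;
use std::path::PathBuf;

// Collect the path of every entry directly inside the posts directory.
fn list_posts(posts_dir: &str) -> Vec<PathBuf> {
    fs::read_dir(posts_dir)
        .expect("failed to read posts directory")
        .filter_map(|entry| entry.ok())
        .map(|entry| entry.path())
        .collect()
}

fn main() {
    for post in list_posts("content/blog") {
        println!("found post: {}", post.display());
    }
}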

I had to make an important design decision here. Blog posts typically have a bit of metadata. This includes title, keywords, date/time, etc. Most SSGs I've seen include this information in a YAML section at the top of the file, which is known as front matter. This works pretty well. However, I wanted my posts to be pure Markdown (CommonMark, to be specific). Front matter is not part of the CommonMark spec. It also isn't guaranteed to render on places like GitHub, for example. Because of this, I decided to make each post a directory, rather than a file. The directories include a metadata.toml file and a post.md file. This worked great.

The metadata.toml file for the post you're reading looks something like this:

title = "Make You a Static Site Generator"
format = "markdown"
date = "2018-06-26"

Eventually, I'll support raw HTML in addition to Markdown. I started with Markdown because I already had a few posts written from my old site.

blog/index.mustache ended up looking like this:

{{> ../header}}
<main>
  <h1>Posts</h1>
  <ul>
    {{#posts}}
    <li>
      <a href={{url}}>{{date}} | <strong>{{title}}</strong></a>
    </li>
    {{/posts}}
  </ul>
</main>
{{> ../footer}}

The {{#posts}} and {{/posts}} are the Mustache syntax for rendering a list of elements from the array named posts, which is passed in when you render the template.

As you can see, the date and title get passed through, and a link is generated for each post. The URL for each link is generated and passed in as well.

Rendering Markdown

So now we've generated an HTML page which lists all our blog posts. Now we need to actually generate a page for each post, starting with the post.md Markdown file and ending up with a static HTML page.

Here's where the implementation started to get more interesting (i.e. challenging). There are a few different Markdown rendering libraries available for Rust. I chose to use pulldown-cmark, which seems to be the most popular (it's used by Gutenberg, a popular SSG written in Rust). Once again, using this library was pretty easy. Just give it a string of Markdown, and it renders a sensible HTML string for you. The problem I ran into was that the built-in syntax highlighting was very minimal. Rather than try some of the other Markdown renderers to see if they were any better, I decided it would be fun to try and handle the highlighting more manually, using the syntect library (once again, this is a popular choice, and it's used by Gutenberg).

Fortunately, pulldown-cmark is well designed for this sort of customization. Basically, when parsing a Markdown file, it gives you a stream of events which represent the beginning, end, and content of each type of CommonMark element encountered. You can either let it handle each type of event the default way, or override specific types of events to customize the behavior. This is exactly how Gutenberg works, and I found their source very helpful for solving my simpler problem. In my case, I wanted to override how it handles CodeBlock events, to use syntect instead of the built-in highlighting.

My (quite hacky) code ended up looking something like this:

// Assumes earlier setup (not shown): a mutable `in_code_block` flag, the
// accumulated `code` string, a `syntax_name`, a `lang_map` from Markdown
// language tags to syntect syntax names, plus syntect's SyntaxSet (`ss`)
// and a `theme`.
let parser = Parser::new(&markdown_text).map(|event| {

    match event {
        Event::Start(Tag::CodeBlock(language)) => {
            in_code_block = true;
            syntax_name = lang_map.get(&language.to_string())
                .expect(&format!("{:?} not in language map", language));
            Event::Html(Owned("<div class='code'>".to_string()))
        },
        Event::End(Tag::CodeBlock(_)) => {
            in_code_block = false;

            let syntax = ss.find_syntax_by_name(
                syntax_name.as_str()).unwrap();

            let mut html = highlighted_snippet_for_string(
                &code.to_string(), syntax, theme);

            html.push_str("</div>");

            code = String::new();
            Event::Html(Owned(html))
        },
        Event::Text(text) => {

            if in_code_block {
                code += &text.to_string();
                Event::Text(Owned("".to_string()))
            }
            else {
                Event::Text(text)
            }
        }
        _ => event
    }
});
2018-06-19T00:00:00.000Z Fullscreen Open in Tab
Deploying a Static Rust App in a Barebones Docker Container

Introduction

This post will cover how to get a simple static Rust executable running inside a barebones Docker container. This allows you to compile static Rust binaries for a single platform (Docker, or more specifically Linux x86), and run them on any operating system which can run Docker. Although Rust already compiles to a lot of platforms, I think this method could still be useful in some cases.

There are already other great blog posts and examples on this topic. This can be seen as a micro-tutorial to get the most basic version working. I highly recommend checking out the other resources to get something actually useful up and running.

Full source code for this post is available on GitHub. As you can see, there's not much to it.

Prerequisites

  • rustup, Rust, and cargo. Note that the best way to install Rust and cargo is using rustup, so you just need to download that and follow the instructions and it will download and install the other two for you.
  • musl libc installed on your system, with the musl-gcc command on your PATH. musl is a lightweight libc implementation that works well for static linking. It is supported by Rust.
  • Docker installation

Rust stuff

First create a new Rust executable project:

cargo new --bin rust_docker_barebones

Navigate to that directory.

We're going to build our executable in release mode, and tell it to use musl to output a static binary that doesn't depend on any dynamically linked libraries.

First we need to install the musl target for Rust:

rustup target install x86_64-unknown-linux-musl

Then we can build it:

cargo build --release --target=x86_64-unknown-linux-musl

That's it for Rust.

Docker stuff

Create the following Dockerfile in the Rust project directory:

FROM scratch

COPY target/x86_64-unknown-linux-musl/release/rust_docker_barebones /rust_docker_barebones

ENTRYPOINT ["/rust_docker_barebones"]

This starts with the most stripped-down Docker image, called the scratch image. Maybe you've used the excellent tiny Alpine image? This is even more minimal than that. Basically the only thing the scratch image can do is run a Linux x86 executable file. The Dockerfile copies our Rust release binary into the image at the location /rust_docker_barebones. Finally it sets that location as the default executable to call when the Docker container is launched.

Now build the Docker image:

docker build -t rust_docker_barebones .

And finally try running it:

docker run rust_docker_barebones

You should see the default Rust "Hello, World!" output.

And that's it!

Next steps

There's a lot you can do to improve this. Here's a couple ideas:

Optimize Docker image size

The resulting Docker image is >4.5MiB in size. This is mostly due to the Rust executable. Optimizing this is beyond the scope of this post, but one simple step is to run the strip program on the binary to remove debug symbols. A quick test on my system yielded a 539k image size. To go deeper, I'd start with this post.

Make a full app

If you want to deploy something actually useful, check out this post which goes into the details of getting a web service working. You could use that information along with a previous post of mine to deploy "pseudo-desktop" applications across all operating systems capable of running Docker and a web browser, without needing to compile for each OS. You just need to target Docker and the browser.

2018-04-04T00:00:00.000Z Fullscreen Open in Tab
Making a 100% Statically-Linked, Single-File Web App with React and Rust

Update 2019-05-22: There is now a Russian translation of this post here. Thanks Vlad!

This tutorial will cover the basics of creating a minimal React app which can be deployed as a statically-linked Rust binary. What this accomplishes is having all of your code, including HTML, JavaScript, CSS, and Rust, packaged into a single file that will run on pretty much any 64-bit Linux system, regardless of the kernel version or installed libraries.

Complete source is available on GitHub.

Why?

  • Simpler deployment: Having a static binary means you just have to copy the file to your servers and run it.
  • Cross-platform native GUI apps: One of the biggest challenges in creating a cross-platform GUI app is working with a GUI library that targets all the platforms you're interested in. The approach here lets you leverage the user's browser for this purpose. This is somewhat similar to what Electron accomplishes, but your backend is in Rust rather than JavaScript, and the user navigates to the app from their browser. There are certainly tradeoffs here, but it can work well for some apps. I was first introduced to this approach by syncthing, which is written in go but does a similar thing.
  • Because I've been obsessed with static linking for as long as I can remember and I'm not really sure why.

Prerequisites

  • rustup, Rust, and cargo (rustup will install the other two for you)
  • Node.js and npm, for the React/Webpack side
  • Optionally, musl libc with the musl-gcc command on your PATH, if you want the fully static build described near the end

Initialize the project directory

We're going to let cargo manage the project directory for us. Run the following commands:

cargo new --bin react_rust_webapp
cd react_rust_webapp

Create the React app

First install React, Babel, and Webpack:

mkdir ui
cd ui
npm init -y
npm install --save react react-dom
npm install --save-dev babel-core babel-loader babel-preset-env babel-preset-react webpack webpack-cli

Then create the source files:

mkdir dist
touch dist/index.html
mkdir src
touch src/index.js

Put the following content in dist/index.html:

<!doctype html>
<html>
  <head>
    <title>Static React and Rust</title>
  </head>
  <body>
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>

And set src/index.js to the following:

import React from 'react';
import ReactDOM from 'react-dom';

ReactDOM.render(
  <h1>Hi there</h1>,
  document.getElementById('root')
);

We will also need a .babelrc file:

{
  "presets": [
    "react",
    "env"
  ]
}

And a webpack.config.js file:

const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
        }
      }
    ]
  }
};

You should now be able to test that the frontend stuff is working. Run:

npx webpack --mode development

This will generate dist/bundle.js. If you start a web server in the dist directory you should be able to successfully serve the example content.

Now for the Rust part.

Setting up the Rust backend

Move up to the react_rust_webapp directory:

cd ..

First thing we need to do is install a web framework. I found Rouille to be great for this simple example. I also really love Rocket.

Add rouille to your Cargo.toml dependencies:

[package]
name = "react_rust_webapp"
version = "0.1.0"

[dependencies]
rouille = "2.1.0"

Now modify src/main.rs to have the following content:

#[macro_use]
extern crate rouille;

use rouille::Response;

fn main() {
    let index = include_str!("../ui/dist/index.html");
    let bundle = include_str!("../ui/dist/bundle.js");
    rouille::start_server("0.0.0.0:5000", move |request| {

        let response = router!(request,
            (GET) ["/"] => {
                Response::html(index)
            },
            (GET) ["/bundle.js"] => {
                Response::text(bundle)
            },
            _ => {
                Response::empty_404()
            }
        );

        response
    });
}

What is this doing?

At compile time, include_str! reads the indicated file and inserts the contents as a static string into the compiled binary. This string is then available as a normal variable.

The rouille code just sets up two HTTP endpoints, "/" and "/bundle.js". Instead of returning the files from the filesystem as we'd typically do with a web app, we're returning the contents of the static strings from the binary.

To learn more about using Rouille to do more advanced stuff refer to their docs.
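
One optional refinement: Response::text serves bundle.js as text/plain, which browsers will generally still execute, but you can set an explicit JavaScript content type instead. If I remember rouille's API correctly (treat this as an assumption and verify against the docs), the /bundle.js arm of the router! block above would become:

(GET) ["/bundle.js"] => {
    // Serve the bundle with an explicit JavaScript content type.
    Response::from_data("application/javascript", bundle)
},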

Running it

Alright, now if all went well we should be able to run it. Make sure ui/dist/bundle.js has already been generated as instructed above. Then run:

cargo run

It should start a server on port 5000. If you navigate to http://localhost:5000 in your browser you should see "Hi there".

Static linking

This part can be skipped if you don't need 100% static linking. Rust statically links most libraries by default anyway, except for things like libc.

If you do want to proceed, you'll first need to install musl libc on your system and ensure the musl-gcc command is on your PATH. You'll also need to add the musl target to your Rust toolchain:

rustup target add x86_64-unknown-linux-musl

Then, rerun cargo as follows:

cargo run --target=x86_64-unknown-linux-musl

Production build

For smaller binaries, build bundle.js as follows (from within ui/):

npx webpack --mode production

And run cargo as follows:

cargo build --release --target=x86_64-unknown-linux-musl

You should end up with a statically linked binary in react_rust_webapp/target/x86_64-unknown-linux-musl/release/

Conclusion

This is just the basics. There's a lot more you could do with this, including:

  • Use build.rs to automatically build the React app when you compile Rust (a rough sketch follows this list).
  • Take the port number from the command line
  • Serialized (probably JSON) requests and responses
  • Run webpack as an npm script command
  • Target other OSes. I haven't tried yet, but this should be mostly transferable to MacOS and Windows, thanks to the awesomeness that is Rust/Cargo and the universal availability of web browsers.
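
For the build.rs idea in the first bullet, the build script can shell out to webpack before Cargo compiles the binary, since include_str! needs bundle.js to exist at compile time. This is only a sketch under the directory layout used in this post, and it assumes npx is available on your PATH:

use std::process::Command;

fn main() {
    // Re-run this script whenever the frontend sources change.
    println!("cargo:rerun-if-changed=ui/src");

    // Build bundle.js before the Rust code is compiled.
    let status = Command::new("npx")
        .args(&["webpack", "--mode", "production"])
        .current_dir("ui")
        .status()
        .expect("failed to run webpack");

    if !status.success() {
        panic!("webpack build failed");
    }
}
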
2014-12-17T17:30:55.000Z Fullscreen Open in Tab
Fix for Vimium that Stopped Working

I love the Vimium extension for Chrome. It basically provides VIM keybindings for Chrome. But some of the bindings randomly quit working a while back, probably after a Chrome update. A quick search didn't yield a simple fix, so I just put up with it for an embarrassingly long time. Finally today I did a bit more digging. Some of the issues on github seemed to indicate local Chrome data might get messed up from updates. My solution was to delete ~/.config/google-chrome (actually moved it to ~/.config/google-chrome.bak just in case). I believe this basically removes all the local data for chrome, as if you had just installed it for the first time. After starting Chrome back up and logging into my account, Vimium is working again! I'm running Ubuntu 14.04 with Chrome 38 as of this writing.

2014-12-01T22:43:09.000Z Fullscreen Open in Tab
How to Install the Google Play Store on Your Amazon Fire Phone

Background

The last few days Amazon has been having a "fire sale" selling their fire phone for 199USD unlocked and off contract, plus a year of Amazon prime (~99USD). Given the hardware this was a little too good for me to pass up. Arguably the biggest problem with the Fire Phone (and all Amazon's devices) is that it doesn't have access to Google's Play Store, and the OS and bootloader are locked down tight, which makes it somewhere between difficult and impossible to install ROMs at the moment.

One short term solution is to sideload the Google Play Store in order to install some of the missing apps. I spent a solid couple of hours trying to figure this out so thought I would summarize what I've learned. I can't take credit for this information. It's basically a combination of this XDA forum thread and this blog post. Those guys are the real wizards. I'm repeating the information here to make it easier for people with the Fire Phone to find. I can confirm that these steps work on the Fire Phone running FireOS 3.6.8.

Steps

Download the files

Download each of the following APK files:

  1. Google Service Framework
  2. Google Login Service
  3. Google Play Services
  4. Google Play Store

Transfer APKs to Phone

I swear this was the hardest part. You need to find a way to get the files onto the phone. If you can plug the phone in and transfer over USB, that would probably be the easiest. I'm using Linux and didn't want to go through the trouble of figuring that out. I ended up sideloading Dropbox and transferring them that way.

Enable APK App Installation

On the phone, go into Settings > Applications & Parental Controls > Prevent non-Amazon app installation and flip the App Installation Switch to ON.

Install File Explorer

Open the Amazon App Store and install ES File Explorer or another file browsing app.

Install the APKs

Using the file explorer (or Dropbox, etc), navigate to the files you downloaded, and install them one by one in the same order you downloaded them. Be sure to reboot between each installation. I don't know how important that is but that's what I did and it worked. After they are all installed you should be able to launch the Play Store, log in with your Google account, and start installing stuff. Currently I'm using Hangouts and Gmail and they seem to work fine. Maps basically works but has some corrupted visuals. YMMV.

NOTE: Initially I wasn't able to log into my google account because I had Google's 2 factor authentication enabled. I disabled it and it worked fine. If anyone finds a workaround let me know.

Enjoy!

2014-08-30T22:35:24.000Z Fullscreen Open in Tab
Chrome Extension PubSub

This tutorial builds the same Chrome extension popup as my Chrome Extension Content Script Stylesheet Isolation tutorial, but uses the chromeps pubsub module to make things easier. For more detailed information, I highly recommend looking through that tutorial.

You can get all the code for this tutorial from https://github.com/anderspitman/chrome-extension-pubsub-example

Background Info

When writing chrome extensions with content scripts, you often find yourself doing a lot of message passing. If your content scripts include iframes, things get even more complicated because in order to communicate between the content scripts and their iframes, you have to ferry the messages back and forth using the background page. This can get messy very quickly. This tutorial serves as a simple but complete example of how to use chromeps to help with these issues.

Objective

To recap from the previous tutorial: we'll be creating a simple chrome extension that uses a content script with a popup that loads on every page the user opens. When the user clicks outside the popup it disappears. This demonstrates the different types of message passing mentioned above.

Install chromeps

Create a new empty directory for your extension and download chromeps.js into it. You can get it from https://github.com/anderspitman/chromeps

Create a new Chrome Extension

Add the following manifest.json:

{
  "manifest_version": 2,
  "name": "Chrome Extension PubSub",
  "description": "This extension demonstrates Content Script CSS Isolation with chromeps",
  "version": "1.0",
  "background" : {
    "scripts" : ["chromeps.js"]
  },
  "content_scripts" : [
    {
      "matches" : ["<all_urls>", "http://*/*", "https://*/*"],
      "css" : ["content.css"],
      "js" : ["chromeps.js", "content.js"]
    }
  ],
  "web_accessible_resources" : ["popup.html"]
}

Notice that we are loading chromeps.js into the background page (for this example we actually don't have any other logic for the background page), and also loading it each time a content script is loaded, which in this case means any time the user opens a web page.

Add Content Script and Style

The manifest references several files that we will need to create. Let's start with content.js:

var iframe = document.createElement('iframe');
iframe.src = chrome.extension.getURL("popup.html");
iframe.className = 'css-isolation-popup';
iframe.frameBorder = 0;
document.body.appendChild(iframe);

chromeps.subscribe('commands', function(message) {
  if (message == 'hide_popup') {
    iframe.style.display = 'none';
  }
});

Here we're creating the iframe that will hold our popup. Try to make sure the className is something unique because this is the one style that may still interfere with the page the user visits. I'm using css-isolation-popup. That style comes from content.css, which is referenced in the manifest. Let's add it real quick:

.css-isolation-popup {
  position: fixed;
  top: 0px;
  left: 0px;
  width: 100%;
  height: 100%;
}

I'm basically just giving the popup free rein over the entire window. It's fine in my case because I have a shaded overlay that surrounds the actual popup. You might need to tweak this for your needs.

Note that we've used chromeps to subscribe to the "commands" topic, so our callback will be invoked any time a message on that topic is published anywhere in chrome.

Add Popup

Now let's add the actual popup files, popup.html and popup.js:

<!doctype html>
<html>

<head>
<style>
.overlay {
  position: fixed;
  top: 0%;
  left: 0%;
  width: 100%;
  height: 100%;
  background-color: black;
  z-index: 1000;
  opacity: .80;
}
.wrapper {
  position: fixed;
  top: 50%;
  left: 50%;
  width: 400px;
  height: 200px;
  margin-left: -200px;
  margin-top: -100px;
  text-align: center;
  background-color:#FFFFFF;
  z-index: 1100;
}
</style>
</head>

<body>
<div class='overlay'></div>
<div class='wrapper'>
  <h1>Click outside to hide</h1>
</div>
<script src='chromeps.js'></script>
<script src='popup.js'></script>
</body>

</html>

Mostly just styling. The overlay is a shaded region which will fill the window surrounding our small popup. The popup lives inside the wrapper.

We're sourcing popup.js from within popup.html. There's no need to add it in the manifest. We're also including chromeps.js.

var overlay = document.querySelector('.overlay');
overlay.addEventListener('click', function() {
  chromeps.publish('commands', 'hide_popup');
});

Here we're handling when the user clicks outside the popup, in the overlay region. When this happens we want to publish a signal to the content script to hide our iframe.

Conclusion

And that's it. If you compare this to the previous tutorial, you'll notice that we don't need to explicitly create a background page just for passing messages, since chromeps takes care of all the heavy lifting for us.

2014-08-04T08:52:47.000Z Fullscreen Open in Tab
Chrome Extension Content Script Stylesheet Isolation

UPDATE 2014-08-30: For a way to handle message passing using the chromeps pubsub module, see this post.

Background Info

When writing Chrome extensions, if you want to inject HTML and CSS into pages the user is visiting, you use what's called a content script. One reason you might want to do this would be to build a custom popup that activates on certain pages.

One of the biggest problems people run into is CSS corruption. The way that content scripts work means that the CSS from your content script is merged with the CSS from the page the user is visiting. This means that the page can corrupt what your popup looks like, and the popup might mess up the page. See here. The ideal situation is for your content script to run in a completely isolated environment. Unfortunately this isn't straightforward. There are a couple of different options. The choice came down to IFrames vs Shadow DOM. I decided to try Shadow DOM first.

The Shadow DOM is (as of this writing) a new technology that is part of the upcoming Web Components. It's very cool stuff. When first trying to implement my popup I tried using the Shadow DOM, but I ran into problems when trying to run JavaScript in my popup. This led me to Custom Elements, another web components feature. Since both shadow DOM and custom elements are very new and not universally supported, at this point I decided to try Polymer. Polymer is a project that provides nice wrappers around web components features, as well as polyfills for features that aren't implemented natively yet. Polymer turned out to be awesome, and did exactly what I needed, but unfortunately there is a bug in the current version of chrome that prevents custom elements from working in content scripts. Back to square one.

Alright, that leaves us with the infamous iframe. This is the solution that worked for me. In the end it was pretty straightforward. There are a couple of caveats, but nothing too bad. I'll run through the basics of how I implemented it.

All of the code used in this example is available from the following github repo: https://github.com/anderspitman/chrome-extension-css-isolation-example

Create a new Chrome Extension

Create an empty directory and add the following manifest.json:

{
  "manifest_version": 2,
  "name": "CSS Isolation",
  "description": "This extension demonstrates Content Script CSS Isolation",
  "version": "1.0",
  "background" : {
    "scripts" : ["background.js"]
  },
  "content_scripts" : [
    {
      "matches" : ["<all_urls>", "http://*/*", "https://*/*"],
      "css" : ["content.css"],
      "js" : ["content.js"]
    }
  ],
  "web_accessible_resources" : ["popup.html"]
}

Add Content Script and Style

The manifest references several files that we will need to create. Let's start with content.js:

var iframe = document.createElement('iframe');
iframe.src = chrome.extension.getURL("popup.html");
iframe.className = 'css-isolation-popup';
iframe.frameBorder = 0;
document.body.appendChild(iframe);

chrome.runtime.onMessage.addListener(function(message) {
  if (message == 'hide_popup') {
    iframe.style.display = 'none';
  }
});

Here we're creating the iframe that will hold our popup. Try to make sure the className is something unique because this is the one style that may still interfere with the page the user visits. I'm using css-isolation-popup. That style comes from content.css, which is referenced in the manifest. Let's add it real quick:

.css-isolation-popup {
  position: fixed;
  top: 0px;
  left: 0px;
  width: 100%;
  height: 100%;
}

I'm basically just giving the popup free rein over the entire window. It's fine in my case because I have a shaded overlay that surrounds the actual popup. You might need to tweak this for your needs.

This is Important

One other thing you'll notice from content.js is the chrome message handler. This brings up a very important point and huge caveat of content scripts in general, and especially of using iframes within content scripts. You cannot directly access code within an iframe from other parts of your extension; you must use Chrome's message passing to transfer information. In addition to this, the iframe cannot pass messages directly to the content script. Therefore, the iframe and content script must communicate with each other through the background page. This is explained in more detail in this excellent post. I think this will be much more clear once we finish our example.

Add Popup

Now let's add the actual popup files, popup.html and popup.js:

<!doctype html>
<html>

<head>
<style>
.overlay {
  position: fixed;
  top: 0%;
  left: 0%;
  width: 100%;
  height: 100%;
  background-color: black;
  z-index: 1000;
  opacity: .80;
}
.wrapper {
  position: fixed;
  top: 50%;
  left: 50%;
  width: 400px;
  height: 200px;
  margin-left: -200px;
  margin-top: -100px;
  text-align: center;
  background-color:#FFFFFF;
  z-index: 1100;
}
</style>
</head>

<body>
<div class='overlay'></div>
<div class='wrapper'>
  <h1>Click outside to hide</h1>
</div>
<script src='popup.js'></script>
</body>

</html>

Mostly just styling. The overlay is a shaded region which will fill the window surrounding our small popup. The popup lives inside the wrapper. I want to stress the fact that everything in here is completely isolated from whatever page the user is visiting. We can name our classes whatever we want with no fear of name collisions from the outside world. Perfect!

We're sourcing popup.js from within popup.html. There's no need to add it in the manifest.

var overlay = document.querySelector('.overlay');
overlay.addEventListener('click', function() {
  chrome.runtime.sendMessage('hide_popup');
});

Here we're handling when the user clicks outside the popup, in the overlay region. When this happens we want to signal the content script to hide our iframe. But remember what we said earlier: we can't communicate directly with the content script, so we need to send the message to the background page and have it forward it to the content script.

Add Background Page

Add the background page as follows:

chrome.runtime.onMessage.addListener(function(message, sender) {
  chrome.tabs.sendMessage(sender.tab.id, message);
});

Literally all it does is repeat whatever messages it receives back out to the tab it received them from. It's worth noting here that the message gets broadcast to the whole tab, so it's reflected back toward the popup where it originated as well as delivered to content.js, which is what actually acts on it.

So at the end of the day, here's what happens:

  1. User clicks shaded region
  2. popup.js detects the click and sends the message hide_popup to background.js
  3. background.js receives the message, and broadcasts it to the tab where it originated
  4. content.js receives the message, and if it is hide_popup it hides the iframe

Conclusion

And there you have it! Load this puppy into chrome, and any page you visit should display a popup. Clicking in the faded area around it makes it disappear. This is a barebones example to be sure but it should be fairly straightforward to augment with additional functionality.

2014-07-21T10:41:56.000Z Fullscreen Open in Tab
Asterisk ARI Quickstart Tutorial in Python

The purpose of this post is to get Asterisk users up and running with the Asterisk 12 ARI with Python as quickly as possible. I'm assuming:

  • You know what the ARI is
  • You know at least the basics of using Asterisk
  • You have Asterisk 12 installed
  • You have Python with pip installed (preferably inside a virtualenv)

I followed this other tutorial closely, particularly the implementation of the websocket stuff:

https://wiki.asterisk.org/wiki/display/AST/Getting+Started+with+ARI

For more info refer to the Official ARI Page

Note that I'm implementing my own interface for the REST calls, since it's a simple example. For a full blown application you'll probably want to use something like python-ari

Set up Asterisk

Enable HTTP server

Asterisk's HTTP server is disabled by default. Open http.conf and make sure the following are uncommented.

enabled=yes
bindaddr=127.0.0.1

Enable and set up ARI

Open ari.conf and uncomment:

enabled=yes

And add the following to the end of the file:

[hey]
type=user
password=peekaboo

Create an extension

We need an entry point for Asterisk to pass control into our ARI app. Just set up an extension that opens the Stasis app as shown below. I'm using extension 100 in the example extensions.conf:

[default]
exten => 100,1,Noop()
      same => n,Stasis(hello,world) ; hello is the name of the application
                                    ; world is its argument list
      same => n,Hangup()

Get the Code

Either clone my repo at https://github.com/anderspitman/ari-quickstart or just copy and paste the script from below.

You'll need to install requests and websocket-client. If you cloned the repo just do:

pip install -r requirements.txt

Otherwise install them manually:

pip install requests websocket-client

Here's the full script, ari-quickstart.py:

#!/usr/bin/env python

import json
import sys
import websocket
import threading
import Queue
import requests


class ARIInterface(object):
    def __init__(self, server_addr, username, password):
        self._req_base = "http://%s:8088/ari/" % server_addr
        self._username = username
        self._password = password

    def answer_call(self, channel_id):
        req_str = self._req_base+"channels/%s/answer" % channel_id
        self._send_post_request(req_str)

    def play_sound(self, channel_id, sound_name):
        req_str = self._req_base+("channels/%s/play?media=sound:%s" % (channel_id, sound_name))
        self._send_post_request(req_str)

    def _send_post_request(self, req_str):
        r = requests.post(req_str, auth=(self._username, self._password))


class ARIApp(object):
    def __init__(self, server_addr):
        app_name = 'hello'
        username = 'hey'
        password = 'peekaboo'
        url = "ws://%s:8088/ari/events?app=%s&api_key=%s:%s" % (server_addr, app_name, username, password)
        ari = ARIInterface(server_addr, username, password)
        ws = websocket.create_connection(url)

        try:
            for event_str in iter(lambda: ws.recv(), None):
                event_json = json.loads(event_str)

                json.dump(event_json, sys.stdout, indent=2, sort_keys=True,
                          separators=(',', ': '))
                print("\n\nWebsocket event***************************************************\n")

                if event_json['type'] == 'StasisStart':
                    ari.answer_call(event_json['channel']['id'])
                    ari.play_sound(event_json['channel']['id'], 'tt-monkeys')
        except websocket.WebSocketConnectionClosedException:
            print("Websocket connection closed")
        except KeyboardInterrupt:
            print("Keyboard interrupt")
        finally:
            if ws:
                ws.close()


if __name__ == "__main__":
    app = ARIApp('localhost')

Try it Out

Start/Restart Asterisk and once it's up run the script:

python ari-quickstart.py

If it doesn't throw any exceptions it should be connected and listening for ARI events. Dial the Stasis extension (100 in my case) and you should hear monkeys.

The script should be easy to modify to add more functionality. It's a good starting point for creating more full featured apps. The biggest thing to worry about is that there's a good chance you won't want your app blocking on the websocket receive calls. A simple solution is to handle events in a separate thread and use a Python queue to pass the received messages in.

2014-06-15T22:40:00.000Z Fullscreen Open in Tab
Setting up an IPython Development Environment from Source

I recently decided to start hacking on the excellent IPython project. I wanted to have full control over the versions of all the software involved, which meant compiling Python from source. This guide is intended to take one through the entire process of setting up a custom Python build with virtualenv in the least number of steps possible, with the final goal of building a virtualenv specifically for IPython dev work.

For this guide I'm using Mint 17 (based on Ubuntu 14.04). Most of the commands should be very similar for most modern Linux systems. The biggest thing that will be different is installing build dependencies. Usually on Debian-based systems that will involve something along the lines of:

sudo apt-get install build-essential

And maybe a few other packages.

Building and Installing Python

For this example I will be installing to /opt/python277. First, create the directory:

sudo mkdir -p /opt/python277

Now we'll get the python source. I want to use the latest Python 2.7. As of this writing it's 2.7.7; get it here. You should be able to follow these instructions with any 2.7.x version. 3.x should work as well, but might be a little different.

Extract the downloaded tarball and go into the directory:

tar -xvf Python-2.7.7.tar.xz
cd Python-2.7.7

We will now configure Python source:

export LD_RUN_PATH=/opt/python277/lib
./configure --prefix=/opt/python277 --enable-shared

What we're doing here is telling python to install to /opt/python277 and to be available as a shared library. This is important for certain packages such as PySide, which we'll install later. The LD_RUN_PATH tells it which library our python executable should link against at runtime. If we didn't set that environment variable, it would link against the system's python library, which causes all sorts of confusion.

Now make and install:

make
sudo make install

This will install a fresh python into /opt/python277. You can test it by running

/opt/python277/bin/python

You should get a python 2.7.7 prompt.

Setting up Package Management and virtualenv

The next thing we want to do is get pip up and running as quickly as possible, so that we can use it for all our package management. We'll download pip directly from PyPI. It depends on setuptools, so download that here.

Then extract and install it:

tar -xzvf setuptools-4.0.1.tar.gz
cd setuptools-4.0.1/
sudo /opt/python277/bin/python setup.py install

Now do the same thing with pip. The one I used is here

tar -xzvf pip-1.5.6.tar.gz
cd pip-1.5.6/
sudo /opt/python277/bin/python setup.py install

Alright, we should now be able to install most python packages from PyPI simply with pip. The first thing we need is virtualenv. If you're not familiar with virtualenv, it's awesome. Check it out here. Install it with:

sudo /opt/python277/bin/pip install virtualenv

Set up IPython virtualenv

We'll now create a virtualenv just for ipython development. I like to keep my virtualenvs in ~/virt_python.

mkdir ~/virt_python
cd ~/virt_python

Create the virtualenv. I'll call it "ipython-dev":

/opt/python277/bin/virtualenv ipython-dev

Activate it:

source ipython-dev/bin/activate

Now when we run python or pip it will use the executables in ~/virt_python/ipython-dev, and any packages we install with pip will only affect our ipython virtualenv.

Install Dependencies

The IPython dependencies we need will depend on which parts of IPython you want to work on. For example, to run the notebook we'll want numpy, ZeroMQ, jinja, and tornado. It's now simply a matter of using pip:

pip install numpy pyzmq jinja2 tornado

Alternatively you can install the dependencies for a specific IPython console automatically as explained below.

I want to run the IPython QT console, which depends on QT. I like the PySide python bindings. First install QT. On my system I needed:

sudo apt-get install qt4-default

Then install PySide:

export PYTHON_INCLUDE_DIRS=/opt/python277/lib
pip install pyside

We need to set PYTHON_INCLUDE_DIRS so that qmake knows what to build against.

Get the IPython Source

Clone the repository from github into ~/ipython-dev:

cd
git clone https://github.com/ipython/ipython ipython-dev

Install dependencies for the IPython notebook by running the following from inside the cloned directory:

cd ipython-dev
pip install -e ".[notebook]"

This also creates an ipython executable in your virtualenv, so as long as the virtualenv is active you can simply run

ipython

to run IPython from your development source.

You should now be set to start hacking!