Yes, my credit card number *does* have spaces!

12 July 2013, by Ben

Okay, so here’s a little rant (by a fellow developer) about web forms that accept credit card numbers. Or rather, web forms that don’t accept them very well.

Many card number input boxes are limited to 16 characters, meaning when I get up to “1234 5678 1234 5” and then try to type the last three digits … BANG, input box full, no more typie. I have to go back and delete the spaces, making it harder to read and check. I had this just today when paying for my Highrise account (though I’m not really picking on 37signals — it happens on many payment sites).

The other one that I get fairly often is when a site lets me enter the number with spaces, but then the server-side check comes back and says “invalid credit card number”. Um, no it’s not invalid — I’m typing it exactly as it appears on my card.

C’mon, folks, we’re programmers! Stripping out non-digits is the kind of thing computers can do in nano- or micro-seconds. What exactly is so hard about adding one line of code?

    card_number = ''.join(c for c in card_number if c.isdigit())

If you’re taking money that I want to give you, please at least let me enter my card number as it appears on my card — digit groups separated by spaces. Stripping really isn’t that hard! :-)
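Stripping is one line, and even a basic client-side sanity check is only a few more. Here’s a rough Python sketch: normalization plus the Luhn checksum that most card numbers satisfy (the number below is a well-known test number, not a real card):

```python
def normalize(card_number):
    """Strip spaces and any other non-digits from user input."""
    return ''.join(c for c in card_number if c.isdigit())

def luhn_valid(digits):
    """Luhn checksum: double every second digit from the right,
    subtracting 9 when the doubled digit exceeds 9."""
    total = 0
    for i, c in enumerate(reversed(digits)):
        d = int(c)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid(normalize("4111 1111 1111 1111")))  # True
```

Note that normalization happens first, so spaces (or dashes, for that matter) never reach the validator at all.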

Masterminds of Programming

21 January 2013, by Ben

Masterminds of Programming book cover

My wife gave me a real geek book for Christmas: Masterminds of Programming by two guys named Federico Biancuzzi and Shane Warden. In it they interview the creators of 17 well-known or historically important programming languages.


The book was a very good read, partly because not all the questions were about the languages themselves. The interviewers seemed very knowledgeable, and were able to spring-board from discussing the details of a language to talking about other software concepts that were important to its creator. Like software engineering practices, computer science education, software bloat, debugging, etc. The languages that everyone’s heard of and used are of course in there: C++, Java, C#, Python, Objective-C, Perl, and BASIC. There are a few missing — for example, the Japanese creator of Ruby didn’t feel comfortable being interviewed in English, and the publishers considered translation too expensive.

But what I really liked were interviews about some of the domain-specific languages, such as SQL, AWK, and PostScript. As well as some of the languages that were further off the beaten track, like APL, Haskell, ML, Eiffel, Lua, and Forth. The one thing I didn’t go for was the 60 pages with the UML folks. That could have been cut, or at least condensed — half of it (somewhat ironically) was them talking about how UML had gotten too big.

If you’re a programmer, definitely go and buy the book (the authors paid me 0x0000 to say that). But in the meantime, below are a few more specific notes and quotes from the individual interviews.

This review got rather long. From here on, it’s less of a real review, and more my “quotes and notes” on the individual chapters. I hope you’ll find it interesting, but for best results, click to go to the languages you’re interested in: C++, Python, APL, Forth, BASIC, AWK, Lua, Haskell, ML, SQL, Objective-C, Java, C#, UML, Perl, PostScript, Eiffel.

C++, Bjarne Stroustrup

C++ might be one of the least exciting languages on the planet, but the interview wasn’t too bad.

I knew RAII was big in C++, and Stroustrup plugged it two or three times in this fairly short interview. Another thing I found interesting was his comment that “C++ is not and was never meant to be just an object-oriented programming language … the idea was and is to support multiple programming styles”. Stroustrup’s very big on generic programming with templates, and he chided Java and C# for adding generics so late in their respective games.

He does note that “the successes at community building around C++ have been too few and too limited, given the size of the community … why hasn’t there been a central repository for C++ libraries since 1986 or so?” A very good thought for budding language designers today. A PyPI or a CPAN for C++ would have been a very good idea.

As usual, though, he sees C++ a little too much as the solution for everything (for example, “I have never seen a program that could be written better in C than in C++”). I think in the long run this works against him.

Python, Guido van Rossum

One thing Python’s creator talks about is how folks are always asking to add new features to the language, but to avoid it becoming a huge hodge-podge, you’ve got to do an awful lot of pushing back. “Telling people you can already do that and here is how is a first line of defense,” he says, going on to describe stages two, three, and four before he considers a feature worth including into the core. In fact, this is something that came up many times in the book. To keep things sane and simple, you’ve got to stick to your vision, and say no a lot.

Relatedly, Guido notes, “If a user [rather than a Python developer] proposes a new feature, it is rarely a success, since without a thorough understanding of the implementation (and of language design and implementation in general) it is nearly impossible to properly propose a new feature. We like to ask users to explain their problems without having a specific solution in mind, and then the developers will propose solutions and discuss the merits of different alternatives with the users.”

After just reading Stroustrup’s fairly involved approach to testing, Guido’s approach seemed almost primitive — though much more in line with Python’s philosophy, I think: “When writing your basic pure algorithmic code, unit tests are usually great, but when writing code that is highly interactive or interfaces to legacy APIs, I often end up doing a lot of manual testing, assisted by command-line history in the shell or page-reload in the browser.” I know the feeling — when developing a web app, you usually don’t have the luxury of building full-fledged testing systems.

One piece of great advice is that early on when you only have a few users, fix things drastically as soon as you notice a problem. He relates an anecdote about Make: “Stuart Feldman, the original author of “Make” in Unix v7, was asked to change the dependence of the Makefile syntax on hard tab characters. His response was something along the lines that he agreed tab was a problem, but that it was too late to fix since there were already a dozen or so users.”

APL, Adin Falkoff

APL is almost certainly the strangest-looking real language you’ll come across. It uses lots of mathematical symbols instead of ASCII-based keywords, partly for conciseness, partly to make it more in line with maths usage. For example, here’s a one-liner implementation of the Game of Life in APL:

Conway's Game of Life in APL

Yes, Falkoff admits, it takes a while to get the hang of the notation. The weird thing is, this is in 1964, years before Unicode, and originally you had to program APL using a special keyboard.

Anyway, despite that, it’s a very interesting language in that it’s array-oriented. So when parallel computing and Single Instruction, Multiple Data came along, APL folks updated their compilers, and all existing APL programs were magically faster without any tweaking. Try that with C’s semantics.

Forth, Chuck Moore

Forth is a small and very nifty language that holds a special place in my heart. :-) It’s quirky and minimalistic, though, and so is its creator.

He’s an extremist, but also sometimes half right. For example, “Operating systems are dauntingly complex and totally unnecessary. It’s a brilliant thing Bill Gates has done in selling the world on the notion of operating systems. It’s probably the greatest con the world has ever seen.” And further on, “Compilers are probably the worst code ever written. They are written by someone who has never written a compiler before and will never do so again.”

Despite the extremism in the quotes above, there’s a lot folks could learn from Forth’s KISS approach, and a lot of good insight Moore has to share.

BASIC, Tom Kurtz

I felt the BASIC interview wasn’t the greatest. Sometimes it seemed Kurtz didn’t really know what he was talking about, for instance this paragraph, “I found Visual Basic relatively easy to use. I doubt that anyone outside of Microsoft would define VB as an object-oriented language. As a matter of fact, True BASIC is just as much object-oriented as VB, perhaps more so. True BASIC included modules, which are collections of subroutines and data; they provide the single most important feature of OOP, namely data encapsulation.” Modules are great, but OO? What about instantiation?

Some of his anecdotes about the constraints implementing the original Dartmouth BASIC were interesting, though: “The language was deliberately made simple for the first go-round so that a single-pass parsing was possible. In other words, variable names are very limited. A letter or a letter followed by a digit, and array names, one- and two-dimensional arrays were always single letters followed by a left parenthesis. The parsing was trivial. There was no table lookup and furthermore, what we did was to adopt a simple strategy that a single letter followed by a digit, gives you what, 26 times 11 variable names. We preallocated space, fixed space for the locations for the values of those variables, if and when they had values.”

AWK, Al Aho, Brian Kernighan, and Peter Weinberger

I guess AWK was popular a little before my (scripting) time, but it’s definitely a language with a neat little philosophy: make text processing simple and concise. The three creators are honest about some of the design trade-offs they made early on that might not have been the best. For example, there was tension between keeping AWK a text processing language, and adding more and more general-purpose programming features.

Apparently Aho didn’t write the most readable code, saying, “Brian Kernighan once took a look at the pattern-matching module that I had written and his only addition to that module was putting a comment in ancient Italian: ‘abandon all hope, ye who enter here’. As a consequence … I was the one that always had to make the bug fixes to that module.”

Another interesting Aho quote, this time about hardware: “Software does become more useful as hardware improves, but it also becomes more complex — I don’t know which side is winning.”

This Kernighan comment on bloated software designs echoes what Chuck Moore said about OSs: “Modern operating systems certainly have this problem; it seems to take longer and longer for my machines to boot, even though, thanks to Moore’s Law, they are noticeably faster than the previous ones. All that software is slowing me down.”

I agree with Weinberger that text files are underrated: “Text files are a big win. It requires no special tools to look at them, and all those Unix commands are there to help. If that’s not enough, it’s easy to transform them and load them into some other program. They are a universal type of input to all sorts of software. Further, they are independent of CPU byte order.”

There’s a lot more these three say about computer science education (being educators themselves), programming vs mathematics, and the like. But you’ll have to read the book.

Lua, Roberto Ierosalimschy and Luiz Henrique de Figueiredo

Lua fascinates me: a modern, garbage-collected and dynamically typed scripting language that fits in about 200KB. Not to mention the minimalist design, with “tables” being Lua’s only container data type. Oh, and the interview was very good too. :-)

As an embedded programmer, I was fascinated by a comment of Roberto’s — he mentions Lua’s use of C doubles as the single numeric type in Lua, but “even using double is not a reasonable choice for embedded systems, so we can compile the interpreter with an alternative numerical type, such as long.”

Speaking of concurrency, he notes that in the HOPL paper about the evolution of Lua they wrote, “We still think that no one can write correct programs in a language where a=a+1 is not deterministic.” I’ve been bitten by multi-threading woes several times, and that’s a great way to put it!
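To see what Roberto means, here’s a quick Python sketch of the read-modify-write hazard and the usual fix (a made-up counter, nothing from Lua itself):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # "counter = counter + 1" is a read-modify-write; without the lock,
        # two threads can read the same old value and one update is lost.
        with lock:
            counter = counter + 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- deterministic only because of the lock
```

Drop the `with lock:` and the final count is anybody’s guess, which is exactly the nondeterminism the Lua authors were complaining about.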

They note they “made many small [mistakes] along the way.” But in contrast to Make’s hard tab issue, “we had the chance to correct them as Lua evolved. Of course this annoyed some users, because of the incompatibilities between versions, but now Lua is quite stable.”

Roberto also had a fairly extreme, but very thought provoking quote on comments: “I usually consider that if something needs comments, it is not well written. For me, a comment is almost a note like ‘I should try to rewrite this code later.’ I think clear code is much more readable than commented code.”

I really like these guys’ “keep it as simple as possible, but no simpler” philosophy. Most languages (C++, Java, C#, Python) just keep on adding features and features. But to Lua they’ve now “added if not all, most of the features we wanted.” Reminds me of Knuth’s TeX only allowing bugfixes now, and its version number converging to pi — there’s a point at which the feature set just needs to be frozen.

Haskell, Simon Peyton Jones, John Hughes, and Paul Hudak

I’ve heard a lot about Haskell, of course, but mainly I’ve thought, “this is trendy, it must be a waste of time.” And maybe there’s truth to that, but this interview really made me want to learn it. John’s comments about file I/O made me wonder how one does I/O in a purely functional language…

It’s a fascinating language, and if Wikipedia is anything to go by, it’s influenced a boatload of other languages and language features. List comprehensions (and their lazy equivalent, generator expressions), which are one of my favourite features of Python, were not exactly invented by Haskell, but were certainly popularized by it.
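For the Python-inclined, the eager/lazy distinction in one tiny example:

```python
squares = [n * n for n in range(5)]   # list comprehension: evaluated eagerly
lazy = (n * n for n in range(5))      # generator expression: evaluated lazily

print(squares)     # [0, 1, 4, 9, 16]
print(next(lazy))  # 0 -- values are produced one at a time, on demand
```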

There’s a lot more in this interview (on formalism in language specification, education, etc), but again, I’m afraid you’ll have to read the book.

ML, Robin Milner

Sorry ML, I know you came first in history, but Haskell came before you in this book, so you were much less interesting. Seriously though, although ML looks like an interesting language, this chapter didn’t grab me too much. There was a lot of discussion on formalism, models, and theoretical stuff (which aren’t really my cup of tea).

What was interesting (and maybe this is why all the formalism) is that ML was designed “for theorem proving. It turned out that theorem proving was such a demanding sort of task that [ML] became a general-purpose language.”

SQL, Don Chamberlin

One often forgets how old SQL is: almost 40 years now. But still incredibly useful, and — despite the NoSQL people — used by most large-scale websites as well as much desktop and enterprise software. So the interview’s discussion of the history of SQL was a good read.

One of the interesting things was they wanted SQL to be used by users, not just developers. “Computer, query this and that table for X, Y, and Z please.” That didn’t quite work out, of course, and SQL is really only used by developers (and with ORMs and suchlike, much of that is not hand-coded). But it was a laudable goal.

The other interesting point was the reasons they wanted SQL to be declarative, rather than procedural. One of the main reasons was optimizability: “If the user tells the system in detailed steps what algorithm to use to process a query, then the optimizer has no flexibility to make changes, like choosing an alternative access path or choosing a better join order. A declarative language is much more optimizer-friendly than a lower-level procedural language.” Many of the other database query languages of the day were more procedural, and nobody’s heard of them today.
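That optimizer-friendliness is easy to demonstrate even with SQLite from Python’s standard library — we state what rows we want, and the engine picks the access path (the toy table below is purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("alan",)])

# Declarative: we say *what* we want, not which index or scan to use.
rows = conn.execute("SELECT name FROM users WHERE id = 2").fetchall()
print(rows)  # [('alan',)]
```

Whether that `WHERE id = 2` becomes a primary-key lookup or a table scan is entirely the engine’s business, which is precisely Chamberlin’s point.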

Objective-C, Tom Love and Brad Cox

When I started writing an iPad app, I of course had to learn Objective-C. At first (like most developers) I was put off by [all [the [square brackets]]] and the longNamesThatTryToDocumentThemselves. But after you get into it, you realize that’s just syntax and style, and the core of Objective-C is actually quite elegant — adding Smalltalk-style OO to C in a low-impact way.

It’s been popularized by Apple for Mac and iOS development, of course, and also been expanded heavily by them, but it really hasn’t strayed from its roots. As Tom Love said, it’s “still Objective-C through and through. It stays alive.”

Tom gives some reasoning behind the ugly syntax: “The square brackets are an indication of a message sent in Objective-C. The original idea was that once you built up a set of libraries of classes, then you’re going to spend most of your time actually operating inside the square brackets … It was a deliberate decision to design a language that essentially had two levels — once you had built up enough capability, you could operate at the higher level … Had we chosen a very C-like syntax, I’m not sure anybody would know the name of the language anymore and it wouldn’t likely still be in use anywhere.”

Tom Love has gone on to be involved with some huge systems and codebases (millions of lines of code), and shares some experience and war stories about those. One of the more off-the-wall ideas he mentions to help would-be project managers get experience is to have a “project simulator” (like a flight simulator): “There is a problem of being able to live long enough to do 100 projects, but if you could simulate some of the decisions and experiences so that you could build your resume based on simulated projects as contrasted to real projects, that would also be another way to solve the problem.”

When asked, “Why emulate Smalltalk?”, Brad Cox says that “it hit me as an epiphany over all of 15 minutes. Like a load of bricks. What had annoyed me so much about trying to build large projects in C was no encapsulation anywhere…”

Comparing Objective-C to C++, he pulls out an integrated circuit metaphor, “Bjarne [C++] was targeting an ambitious language: a complex software fabrication line with an emphasis on gate-level fabrication. I was targeting something much simpler: a software soldering iron capable of assembling software ICs fabricated in plain C.”

One extreme idea (to me) that Brad mentions is in his discussion of why Objective-C forbids multiple inheritance. “The historical reason is that Objective-C was a direct descendant of Smalltalk, which doesn’t support inheritance, either. If I revisited that decision today, I might even go so far as to remove single inheritance as well. Inheritance just isn’t all that important. Encapsulation is OOP’s lasting contribution.”

Unfortunately for me, the rest of the interview was fairly boring, as Brad is interested in all the things I’m not — putting together large business systems with SOA, JBI, SCA, and other TLAs. I’m sure there are real problems those things are trying to solve, but the higher and higher levels of abstraction just put me to sleep.

Java, James Gosling

Like the C++ interview, the Java interview was a lot more interesting than the language is.

I know that in theory JIT compilers can do a better job than more static compilers: “When HotSpot runs, it knows exactly what chipset you’re running on. It knows exactly how the cache works. It knows exactly how the memory hierarchy works. It knows exactly how all the pipeline interlocks work in the CPU … It optimizes for precisely what machine you’re running on. Then the other half of it is that it actually sees the application as it’s running. It’s able to have statistics that know which things are important. It’s able to inline things that a C compiler could never do.” Those are cool concepts, but I was left wondering: how well do they actually work in practice? For what cases does well-written Java actually run faster than well-written C? (One might choose Java for many other reasons than performance, of course.)

James obviously has a few hard feelings towards C#: “C# basically took everything, although they oddly decided to take away the security and reliability stuff by adding all these sort of unsafe pointers, which strikes me as grotesquely stupid.”

And, interestingly, he has almost opposite views on documentation to Roberto Ierosalimschy from Lua: “The more, the better.” That’s a bit of a stretch — small is beautiful, and there’s a reason people like the conciseness of K&R.

C#, Anders Hejlsberg

Gosling may be right that C# is very similar to (and something of a copy of) Java, but it’s also a much cleaner language in many ways. Different enough to be a separate language? I don’t know, but now C# and Java have diverged enough to consider them quite separately. Besides, all languages are influenced by existing languages to a lesser or greater extent, so why fuss?

In any case, Anders was the guy behind the Turbo Pascal compiler, which was a really fast IDE and compiler back in the 1980s. That alone makes him worth listening to, in my opinion.

What he said about the design of LINQ with regard to C# language features was thought-provoking: “If you break down the work we did with LINQ, it’s actually about six or seven language features like extension methods and lambdas and type inference and so forth. You can then put them together and create a new kind of API. In particular, you can create these query engines implemented as APIs if you will, but the language features themselves are quite useful for all sorts of other things. People are using extension methods for all sorts of other interesting stuff. Local variable type inference is a very nice feature to have, and so forth.”

Surprisingly (with Visual Studio at his fingertips), Anders’ approach to debugging was much like Guido van Rossum’s: “My primary debugging tool is Console.WriteLine. To be honest I think that’s true of a lot of programmers. For the more complicated cases, I’ll use a debugger … But quite often you can quickly get to the bottom of it just with some simple little probes.”

UML, Ivar Jacobson, Grady Booch, and James Rumbaugh

As I mentioned, the UML interview was too big, but a good portion of it was the creators talking about how UML itself had grown too big. Not just one or two of them — all three of them said this. :-)

I’m still not quite sure what exactly UML is: a visual programming language, a specified way of diagramming different aspects of a system, or something else? This book is about programming languages, after all — so how do you write a “Hello, World” program in UML? Ah, like this, that makes me very enthusiastic…

Seriously, though, I think their critique of UML as something that had been taken over by design-by-committee made a lot of sense. A couple of them referred to something they called “Essential UML”, which is the 20% of UML that’s actually useful for developers.

Ivar notes how trendy buzzwords can make old ideas seem revolutionary: “The ‘agile’ movement has reminded us that people matter first and foremost when developing software. This is not really new … in bringing these things back to focus, much is lost or obscured by new terms for old things, creating the illusion of something completely new.” In the same vein, he says that “the software industry is the most fashion-conscious industry I know of”. Too true.

Grady Booch gave some good advice about reading code: “A question I often ask academics is, ‘How many of you have reading courses in software?’ I’ve had two people that have said yes. If you’re an English Lit major, you read the works of the masters. If you want to be an architect in the civil space, then you look at Vitruvius and Frank Lloyd Wright … We don’t do this in software. We don’t look at the works of the masters.” This actually made me go looking at the Lua source code, which is very tidy and wonderfully cross-referenced — really a good project to learn from.

Perl, Larry Wall

I’ve never liked the look of Perl, but Wall’s approach to language design is as fascinating as he is. He originally studied linguistics in order to be a missionary with Wycliffe Bible Translators and translate the Bible into unwritten languages, but for health reasons had to pull out of that. Instead, he used his linguistics background to shape his programming language, and he lists a number of “fundamental principles of human language” that have “had a profound influence on the design of Perl over the years”.

Larry’s a big fan of the human element in computer languages, noting that “many language designers tend to assume that computer programming is an activity more akin to an axiomatic mathematical proof than to a best-effort attempt at cross-cultural communication.”

He discusses at length some of the warts in previous versions of Perl, that they’re trying to remedy with Perl version 6. One of the interesting ones was the (not so) regular expression syntax: “When Unix culture first invented their regular-expression syntax, there were just a very few metacharacters, so they were easy to remember. As people added more and more features to their pattern matches, they either used up more ASCII symbols as metacharacters, or they used longer sequences that had previously been illegal, in order to preserve backward compatibility. Not surprisingly, the result was a mess … In Perl 6, as we were refactoring the syntax of pattern matching we realized that the majority of the ASCII symbols were already metacharacters anyway, so we reserved all of the nonalphanumerics as metacharacters to simplify the cognitive load on the programmer. There’s no longer a list of metacharacters, and the syntax is much, much cleaner.”

Larry’s not a pushy fellow. His subtle humour and humility are evident throughout the interview. For example, “It has been a rare privilege in the Perl space to actually have a successful experiment called Perl 5 that would allow us to try a different experiment that is called Perl 6.” And on management, “I figured out in the early stage of Perl 5 that I needed to learn to delegate. The big problem with that, alas, is that I haven’t a management bone in my body. I don’t know how to delegate, so I even delegated the delegating, which seems to have worked out quite well.”

PostScript, Charles Geschke and John Warnock

My experience with Forth makes me very interested in PostScript, even though it’s a domain-specific printer control language, and wasn’t directly inspired by Forth. It’s stack-based and RPN, like Forth, but it’s also dynamically typed, has more powerful built-in data structures than Forth, and is garbage collected.

One thing I wasn’t fully aware of is how closely related PostScript is to PDF. PDF is basically a “static data structure” version of PostScript — all the Turing-complete stuff like control flow and logic is removed, but the fonts, layout and measurements are done exactly the same way as PostScript.

Some of the constraints they relate about implementing PostScript back in the early days are fascinating. The original LaserWriter had the “largest amount of software ever codified in a ROM” — half a megabyte. “Basically we put in the mechanism to allow us to patch around bugs, because if you had tens of thousands or hundreds of thousands of printers out there, you couldn’t afford to send out a new set of ROMs every month.” One of the methods they used for the patching was PostScript’s late binding, and its ability to redefine any operators, even things like the “add” instruction.

John Warnock mentions that “the little-known fact that Adobe has never communicated to anybody is that every one of our applications has fundamental interfaces into JavaScript. You can script InDesign. You can script Photoshop. You can script Illustrator with JavaScript. I write JavaScript programs to drive Photoshop all the time. As I say, it’s a very little-known fact, but the scripting interfaces are very complete. They give you real access, in the case of InDesign, into the object model if anybody ever wants to go there.” I wonder why they don’t advertise this scriptability more, or is he kind of being humble here?

I didn’t realize till half way through that the interviewees (creators of PostScript) were the co-founders of Adobe and are still the co-chairmen of the company. The fact that their abilities extend to both the technical and the business side … well, I guess there are quite a few software company CEOs that started as programmers, but I admire that.

Weird trivia: In May 1992, Charles Geschke was approached by two men who kidnapped him at gunpoint. The long (and fascinating) story was told five years later when the Geschkes were ready to talk about it. Read it in four parts here: part one, part two, part three, and part four.

Eiffel, Bertrand Meyer

Eiffel, you’re last and not least. Eiffel is quite different from Java or C#, though it influenced features in those languages. It incorporates several features that most developers (read: I) hadn’t heard of.

Design by Contract™, which Microsoft calls Code Contracts, is a big part of Eiffel. I’m sure I’m oversimplifying, but to me they look like a cross between regular asserts and unit tests, but included at the language level (with all the benefits that brings). Bertrand Meyer can’t understand how people can live without it: “I just do not see how anyone can write two lines of code without this. Asking why one uses Design by Contract is like asking people to justify Arabic numerals. It’s those using Roman numerals for multiplication who should justify themselves.”
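Eiffel builds require and ensure clauses directly into routine declarations; as a rough Python approximation of the idea (plain asserts standing in for real contract clauses):

```python
def int_sqrt(n):
    # require (precondition): callers must pass a non-negative integer
    assert n >= 0, "require: n >= 0"
    root = 0
    while (root + 1) * (root + 1) <= n:
        root += 1
    # ensure (postcondition): root is the largest integer whose square <= n
    assert root * root <= n < (root + 1) * (root + 1), "ensure: floor sqrt"
    return root

print(int_sqrt(10))  # 3
```

The difference in Eiffel is that contracts are part of the routine’s public interface — documented, inherited, and toggleable — rather than asserts buried in the body.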

He does have a few other slightly extreme ideas, though. Here’s his thoughts on C: “C is a reasonably good language for compilers to generate, but the idea that human beings should program in it is completely absurd.” Hmmm … I suspect I’d have a hard time using Eiffel on an MSP430 micro with 128 bytes of RAM.

It appears that Eiffel has a whole ecosystem, a neat-looking IDE, company, and way of life built around it. One of the few languages I know of that’s also a successful company in its own right. It’d be like if ActiveState was called “PythonSoft” and run by Guido van Rossum.

The end

That’s all folks. I know this falls in the category of “sorry about the long letter, I didn’t have time to write a short one”. If you’ve gotten this far, congratulations! Please send me an email and I’ll let you join my fan club as member 001.

Seriously though, if you have any comments, the box is right there below.

C#’s async/await compared to protothreads in C++

12 November 2012, by Ben

For different parts of my job, I get to work in both high level and very low level software development — developing a Windows 8 app in C# on the one hand, and writing embedded C++ code for a microcontroller with 4KB of RAM on the other.

In our embedded codebase we’ve been using our C++ version of Adam Dunkels’ protothreads, and recently I noticed how similar protothreads are to C#’s new await and async keywords. Both make asynchronous code look like “normal” imperative code with a linear flow, and both unroll to an actual state machine under the covers.

There’s a great answer on StackOverflow showing how the C# compiler does this (and you can read more in-depth here). The example on StackOverflow shows that this C# code:

async Task Demo() {
  var v1 = foo();
  var v2 = await bar();
  more(v1, v2);
}

Is compiled down to something like this:

class _Demo {
  int _v1, _v2;
  int _state = 0;
  Task<int> _await1;
  public void Step() {
    switch(this._state) {
    case 0:
      this._v1 = foo();
      this._await1 = bar();
      // When the async operation completes, it will call this method
      this._state = 1;
      return;
    case 1:
      this._v2 = this._await1.Result; // Get the result of the operation
      more(this._v1, this._v2);
      break;
    }
  }
}

C++ protothreads unroll to state machines in a similar way. For a (slightly more involved) example, see the protothread vs the state machine examples at my original blog entry.

C#’s async/await is especially similar to the C++ flavour of protothreads: both convert local variables into member variables so that they’re still around the next time the protothread runs. In C#, of course, await is available at the language level, and as a result this is done automagically by the compiler. In C++, PT_WAIT is a macro whose implementation even Duff himself probably wouldn’t care for. And of course, C++ protothreads don’t use continuations.
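To make the comparison concrete, here’s a heavily simplified protothread-style sketch in C++. The macro names echo Adam Dunkels’ library, but this is a toy re-implementation for illustration, not the real thing; the stored line number plays the same role as _state in the C# expansion above:

```cpp
// Toy protothread macros (illustrative only; the real library differs).
// A switch on a saved line number lets run() resume where it left off,
// just like the compiler-generated _state machine in C#.
#define PT_BEGIN()          switch (_line) { case 0:
#define PT_WAIT_UNTIL(cond) _line = __LINE__; case __LINE__: \
                            if (!(cond)) return false
#define PT_END()            } _line = 0; return true

class DemoThread {
  int _line = 0;   // resume point, analogous to _state in the C# version
public:
  int result = 0;
  bool ready = false;   // the "event" this protothread waits for
  // Returns true once the protothread has run to completion.
  bool run() {
    PT_BEGIN();
    result = 1;              // code before the wait
    PT_WAIT_UNTIL(ready);    // parks here until ready becomes true
    result = 2;              // resumes here on a later call to run()
    PT_END();
  }
};
```

Calling run() before ready is set returns false with the first step done; once ready is true, the next call resumes after the wait and returns true. Note that local state has to live in member variables — exactly the transformation the C# compiler performs automatically.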

But they both work very well for their intended use cases! In any case, I thought this similarity was pretty neat — to me C#’s approach with async/await validates the protothread concept, despite the latter being implemented at a much lower level and with hairy macros.

So if you’re doing low-level embedded development, do check out protothreads — either the straight C version or the C++ equivalent.

Save a Gmail message to hard disk

5 September 2012, by Berwyn    5 comments

Often you want to save a formatted Gmail message to your hard disk.  Maybe it’s a license key or some design documentation that you want to save in version control.  You can print the whole conversation as a pdf, but not just one message.  And .html would be better than pdf.  Well, here’s how.

As you know, you can already save a plain-text message by selecting “show original” from the drop-down menu in the top-right of the email.  But an HTML message can look really ugly when displayed in plain-text format.

And here’s the secret.  Once you get your “show original” window up, your address bar will contain something like this:

Just change the bit that says “view=om&th” to “view=lg&msg”.  Push <Enter>, and Bingo: it’s no longer plain text.  Now save it as .html or as pdf.

Alternatively, below is a bookmarklet that achieves the same thing. To use it, just drag the link below to your bookmarks bar. Then when you want to see the message as HTML, view the original, then click your “Gmail Original as HTML” bookmark button:

Gmail Original as HTML

Sometimes it’s the little things …

µA/MHz is only the beginning

1 November 2011, by Berwyn    add a comment

Low Energy MicrocontrollerPicking a low-power microcontroller is way more than just selecting the lowest-current device.  Low µA/MHz may be the least of your power worries in a mobile device.  Every man and his dog is now selling microcontrollers containing an ARM core, and many are pretty much as good as ARM can make them, at around 150 µA/MHz.  But the packaging around that core makes all the difference.  Let’s look inside an Energy Micro EFM32 microcontroller to see what it takes for their EFM32TG110 to be ahead of the pack.  The keys are sleep mode and sensor power.

On a modern design you may have to sense a capacitive-touch keypad, sample ambient light level, listen to a serial port, monitor wireless communications and a swipe card.  If you have to wake up your CPU 1000 times a second to check all these sensors, your battery life is shot.  It’s the difference between a one-year non-rechargeable battery life and a 10-year life.  And that means the difference between having a user-replaceable battery or a never-replace battery – which comes down to a quality user experience.  And Steve Jobs has shown us that user experience is what sells products.

The answer is ultimate sleep and low energy peripherals.

There’s sleep and there’s SLEEP

To save power, any mobile device must spend most of its time in sleep mode – by long-established tradition.  So to start with, look for the current used in sleep mode.  Our example EFM32 device has five different low-power modes, each enabling various functionality.  Its lowest-power nearly-dead mode draws only 20 nA (yes, really, nano-amps), the real-time clock as low as 0.5 µA on some parts, and highly functional modes only 1 µA.

The key thing to look for in sleep mode is fast wake-up.  If waking from sleep takes more than a couple of microseconds, you’re wasting power that whole time.  Ideally, you want your software to sleep indefinitely on every single iteration of the main loop, woken only by I/O.  So all your I/O needs to be interrupt driven, and your processor needs to keep time without waking up.  Only timer or I/O events should wake up the processor: don’t wake up every millisecond.  This has software implications: check out lightweight Protothreads or the Contiki OS.
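As a sketch of what “woken only by I/O” looks like in firmware (hypothetical names throughout; on real hardware the stub below would be a WFI instruction or the vendor’s enter-sleep call):

```cpp
#include <cstdint>

// Event flags set from interrupt handlers; the main loop never polls.
volatile uint32_t g_events = 0;
enum { EV_UART = 1u << 0, EV_TOUCH = 1u << 1 };

int g_uart_handled = 0, g_touch_handled = 0;

// Stub: on real hardware this would be __WFI() or an enter-sleep call.
void wait_for_interrupt() {}

// One iteration of the main loop; returns true if any event was serviced.
bool main_loop_step() {
  uint32_t pending = g_events;   // snapshot flags set by ISRs
  g_events = 0;                  // (real code would mask interrupts here)
  if (pending & EV_UART)  ++g_uart_handled;    // service serial data
  if (pending & EV_TOUCH) ++g_touch_handled;   // service the touch sensor
  if (!pending) wait_for_interrupt();          // nothing to do: sleep
  return pending != 0;
}
```

The point is that the processor only runs when an interrupt has actually flagged work to do; there is no millisecond tick burning power.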

The final key: low energy peripherals

Sleep mode is good, but by itself it is not enough of a power saver.  10-year single-cell battery life, remember?  The real “killer feature” is low-energy peripherals.  Our EFM32 example boasts low-energy peripherals (LESENSE) that run in the background while your processor stays in sleep mode.  While the processor sleeps, they can sense a capacitive slider or buttons, send UART serial data to/from memory, check light sensors, voltage levels, load cells, and so on.  This all happens with the processor drawing just 1 µA.  No need to wake up to a hungry 150 µA/MHz.

There’s one more thing to look for: why not power your sensors only briefly, turning them off between measurements?  For example, a capacitive touch button only needs to sense, say, 20 times a second.  Measuring a weight threshold on a load cell takes an op-amp and a comparator – but these only need to be powered up briefly, several times a second.  Similarly, a voltage divider only needs to be connected during the comparator measurement.
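The arithmetic behind this is worth seeing. With illustrative numbers (not datasheet values): a comparator that draws 100 µA, but is powered for only 10 µs per sample at 20 samples a second, adds almost nothing to the sleep-mode budget:

```cpp
// Average supply current for a duty-cycled sensor, in microamps.
// duty = fraction of the time the sensor is actually powered.
double average_current_uA(double on_uA, double on_seconds, double rate_hz,
                          double sleep_uA) {
  double duty = on_seconds * rate_hz;            // e.g. 10e-6 * 20 = 0.0002
  return on_uA * duty + sleep_uA * (1.0 - duty);
}
```

With those numbers the sensor contributes 0.02 µA on top of a 1 µA sleep current – roughly 5000 times less than powering it continuously.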

The EFM32 microcontroller achieves this with an array of analog comparators and op-amps that are powered up for a few microseconds each time a sensor is sampled.  Only when a finger is sensed on a button, or a voltage level drops below a certain point, does the microcontroller need to be woken up to decide what to do with the event.

We’ve been using the EFM32 as an example.  There may be others “out there” that do the same kind of thing, but we haven’t found any yet.  In our minds, this places Energy Micro in a league of its own.  If you know of similar low-energy sensor technology in ARM microcontrollers, we’d love to hear your comments.

Also see Low-Energy Mobile, the Nov issue of our newsletter or our other technical articles.

Single-port USB3 docking

1 November 2011, by Berwyn    add a comment

This made-for-mobile docking idea is royalty-free for any laptop manufacturer to take up.  All I ask is that you give me one  :-).  Docking stations are bulky, so why not make the laptop power supply a single USB connector into the laptop?  It can supply power over this (special) USB connector’s power wires, and the power supply can also be a USB3 hub for the room’s peripherals: second monitor, printer, etc.  When you’re in the office, just plug in power and you’re docked.  And when you’re out of the office, the same laptop port doubles as a spare USB3 port.

Endian solution for C

1 November 2011, by Berwyn    2 comments

Big or little end?

Are numbers stored big-end or little-end first?  C programmers have to byte-swap manually to deal with this.  Portable code becomes icky.  The full ickiness is illustrated in Intel’s excellent Endianness White Paper.

But in C there is no reason the compiler can’t do the hard work and make programs both portable and pretty.  Here we present a quick hack, and also a solution requiring a minor change to the C compiler.  First the quick hack.

Quick Hack

Suppose we’re dealing with a USB protocol setup packet:

struct setup {
  char  bmRequestType,  bRequest;
  short  wValue,  wIndex,  wLength;
};

Since USB is little-endian, the short integers will work on an x86 machine, but if you have the job of porting to a big-endian ARM, you’ll need to byte-swap each of these values every time they’re accessed.  This could be a lot of code rework.

One quick-n-dirty way to accomplish this is to simply store the USB packet in reverse order as it comes in your door (or write a reversal function).  Then define your struct in reverse order:

struct setup {
  short  wLength,  wIndex,  wValue;
  char  bRequest,  bmRequestType;
};

Note that this will only work if you don’t have strings in your struct, as C’s string library functions don’t expect strings to be in reverse order!
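For completeness, here’s a sketch of the reversal-function half of the hack (a hypothetical helper, assuming the fixed-size setup packet above):

```cpp
#include <cstddef>

// Reverse a raw byte buffer in place, so a little-endian packet can be
// read through a struct whose fields are declared in reverse order.
void reverse_bytes(unsigned char *buf, std::size_t len) {
  if (len < 2) return;                 // nothing to do (and avoids len-1 underflow)
  for (std::size_t i = 0, j = len - 1; i < j; ++i, --j) {
    unsigned char tmp = buf[i];        // classic in-place swap
    buf[i] = buf[j];
    buf[j] = tmp;
  }
}
```

You’d call this on the raw 8-byte packet as it arrives, then overlay the reversed struct on the buffer.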

A Real Solution

The following solution has the C compiler dealing with the whole endian issue for you – making your program totally portable with zero effort.  It would require the addition of a single type modifier to the C compiler (C compilers already have various type modifiers).  To solve endian portability, this addition would be well worth it.

This concept is similar to the ‘signed’ and ‘unsigned’ type modifiers, or the GNU C compiler’s packed attribute, which helps you access a protocol’s data by letting you prevent padding between structure elements.

Our example above would become:

struct setup {
  char  bmRequestType,  bRequest;
  little_endian  short  wValue,  wIndex,  wLength;
};

All that is needed is to be able to specify the endian nature in a type modifier: little_endian or big_endian.  In the above, an x86 compiler would know to ignore the modifier since it’s already little-endian.  But a big-endian ARM compiler would know to byte-swap for you upon reading or writing.

The same would work for pointers:

little_endian  long  *x,  *y;

Whenever x or y is accessed, the bytes are swapped by the compiler.  You can even cast a standard long pointer to a little_endian long to force a compiler byte-swap upon access.

Internally, the compiler would probably implement just a single byte-swap type modifier which it would apply to all non-native accesses.  But for portability and clarity, this should be spelled out as little_ or big_endian in the source.
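Incidentally, in C++ (though not in plain C) much of this behaviour can be approximated today as a library type rather than a compiler change. A sketch, with the swap done in software on every access:

```cpp
#include <cstdint>

// Stores T's bytes in little-endian order regardless of host endianness;
// the conversion operators do the byte shuffling on read and write.
template <typename T>
class little_endian {
  unsigned char bytes_[sizeof(T)];   // always least-significant byte first
public:
  little_endian(T v = 0) { *this = v; }
  little_endian &operator=(T v) {
    for (unsigned i = 0; i < sizeof(T); ++i)
      bytes_[i] = static_cast<unsigned char>(v >> (8 * i));
    return *this;
  }
  operator T() const {
    T v = 0;
    for (unsigned i = 0; i < sizeof(T); ++i)
      v |= static_cast<T>(bytes_[i]) << (8 * i);
    return v;
  }
};
```

On a little-endian host the stored bytes already match the native layout and a decent optimizer removes the shuffling; on a big-endian target the same source swaps automatically.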

It should also be noted that this same solution solves the endian problem for bit fields and bit masks (White Paper p12).

We are not C compiler gurus, so we’re not going to risk adding this change to a compiler ourselves.  But we put it “out there” for comment, and hopefully uptake.  Our guess is this should go into GCC first, and then others will gradually follow suit.

Earthquake Update and How to use your Water Heater as a Reservoir

2 March 2011, by Bryan    4 comments

Status update and some thoughts

We hope this finds you well. Our thoughts and deepest sympathies are with all our fellow citizens of Christchurch and New Zealand who’ve been affected in some way by the recent major 6.3 earthquake in Christchurch.

Our families and close friends are unharmed, and we’ve sustained only minor damage to our buildings and property. Since our buildings are in the CBD, we’ll be working out of other offices temporarily. Our customers will be pleased to know that all our data, websites, and other services are backed up and fully operational from our Auckland-based servers.

We realize that many other Christchurch people & businesses have been affected in much more tragic ways. Without wanting to gloss over what is truly a sad situation, we believe the Christchurch situation is in God’s hands. We know that saying so doesn’t make it any less difficult for those who have lost friends, family members, and property. We want you to know we’re praying for you.

Like everyone, we are doing what we can to help relieve burdens where possible.

How to use your water heater as a reservoir

We all know by now that lack of clean drinking water is one of the serious long-term hazards in an earthquake: the main water supply can be off or contaminated by the sewer. Here’s how to use your water cylinder as a reservoir of clean drinking water.

This will be of most use if your water is still off, because otherwise you’ll probably have some unclean water in your tank. The procedure may differ slightly for you, but the basics are here.

Click on the photos to the right to see more detail.

1. Turn off the feed immediately

Feed tap Turn off the cylinder’s cold water feed as soon as you can after the quake. If you’re lucky, you’ve got a tap on the pipe coming out of the wall. Or if you have a header tank in your attic, the feed will be up there.

2. Find the flush outlet

Overflow drainpipe Find the “flush” or overflow drain outlet. In this case we had to unscrew bits of the plastic drain to reveal the end of the grey outlet pipe.

3. Pull up the air inlet valve and prop it up

Air inlet Before letting clean water out of the drain pipe, pull up the air inlet valve to let air into the top of the tank as water goes out the bottom. You need to find something to prop open the air inlet valve. If nothing is going into the top of the tank when you drain water out the bottom, then the water will slow and stop. (Otherwise it can apparently suck the liner off the inside of the tank.) If you’ve got a header tank in your attic that feeds your cylinder, then you won’t have an inlet valve and you don’t need to worry about this.

4. Find the flush knob

Flush knob Find the “flush” knob that leads to that drain pipe, and you should get hot water coming out. It’s clean! Use it sparingly: your neighbours might need some, too. In this case, the flush knob happens to be marked “flush”. You might not be so lucky. Look for where you think the water might drain out. The flush mechanism is used by plumbers to drain your tank before they have to replace it.

5. Turn off the power

You might want to turn off your water heater’s electric switch so you can get some cold water out. It’s easier to heat water than to cool it.

6. Catch rainwater from your roof

Downpipe modification Simply find a way to detach the downpipe from where it enters the drain, and place a large container under it. Some down-pipes (like the one shown) can be disconnected without even a screwdriver just by popping the pipe out of the spout.

If you’re smart, you’ll probably find a way to channel your roof water into your header tank, but there’s no need to get fancy here: just use any large container you have handy.


Turning off the water feed is important so that you don’t let water from the main supply into your cylinder, because after an earthquake the supply is potentially contaminated by broken sewer lines.  If you want your water heater to retain clean water, be sure to turn off the feed before you draw any hot water out of your hot water taps, because drawing hot water out of a tap sucks potentially contaminated cold water into the heater.  Don’t worry if you’ve just run a hot tap briefly by accident: you’ve probably just sucked in a bit of clean cold water that was still in your pipes.  Some tanks don’t have a feed tap.  If that’s true for you, then your hot water will only stay clean if your mains water supply is off and not feeding your tank.

Do you have any useful earthquake tips of your own?

The Founding Fathers of the Silicon Valley

15 February 2011, by Latesha    one comment

Vacuum tubes -- what they had before transistors Two days before Christmas in 1947, the world busily scurried around buying and wrapping gifts, cooking ham, and filling stockings, completely unaware that three men had just created something incredible: something which would usher in the Information Age and change the face of technology, business, and the way we live our lives. In fact, the presents which today’s teenagers demand from Santa wouldn’t even be possible without this small but significant invention – the transistor.

Anyone with an interest in electronics knows just how important this device is. A semiconductor device which amplifies and switches electronic signals, the transistor is the key active component of many gadgets we consider essential in our modern life. We can carry our slim, streamlined laptops, listen to MP3 players, and punch out sums on our portable calculators all thanks (largely) to three very intelligent men – John Bardeen, Walter Brattain, and William Shockley.

Their discovery and its development played an instrumental role in the history of technology. Let’s explore the unique contributions made by each member of the transistor ‘A Team’.

The people underneath the geniuses

John Bardeen was often said to be the brains of the operation. Born to well-off, intelligent parents in 1908, John was something of a prodigy; graduating high school at 15 and described by his mother as ‘the concentrated essence of the brain’. A defining point for John was the pursuit of his PhD in mathematical physics at Princeton University. This is where he explored the study of metals with scientists like Eugene Wigner and Frederick Seitz, and realised the importance of quantum mechanics theories in understanding how semiconductors worked. He couldn’t have possibly known at the time just how crucial this knowledge would become!

Walter Brattain has been described as the hands that complemented John’s brain. As a co-worker said, ‘He could put things together out of sealing wax and paper clips, if you wish, and make things work.’ More cowboy than scientist, Walter was brought up on a cattle ranch in Washington, and applied his practical skills to every problem presented. After gaining his PhD in physics, a chance meeting with Joseph Becker of Bell Labs landed him a position studying copper-oxide rectifiers. His insight into the generation of electrical currents in crystals, and his research on the surfaces of the semiconductors cuprous oxide and silicon, made him a valuable asset in the development of the transistor.

William Shockley was definitely the most controversial character of the trio. His astounding achievements in advancing radar equipment and depth charges during World War II, position as advisor to the Secretary of War, and brilliant mind are in direct contrast to the violent, racist, and bitter characteristics he became known for later in life. William headed up a research team at Bell Labs seeking a solid-state alternative to fragile glass vacuum tube amplifiers, and brought together John and Walter, whom he knew from his schooldays, to help uncover why the amplifier design he had devised didn’t work.

The discovery

John, Walter, and William worked feverishly to solve this puzzle. In the laboratory and at the golf course they talked metals, voltage, and the implications of electrical currents on surfaces. After switching from using a silicon crystal to germanium, and implementing their discoveries about how electrons behave at the surface of the metal, Bell Labs were convinced that they had cracked it, and hastened to patent the new designs.

The first point-contact transistor (“transistor” being a portmanteau of “transfer resistor”) was born!

The three men were awarded the Nobel Prize in Physics in 1956 for their key research into the transistor concept. Interestingly, by this point John Bardeen had left Bell Labs due to William Shockley’s disagreeable, competitive nature and the consequent disintegration in their working relationship.


The arrival of the transistor was a catalyst for the Silicon Valley boom. William Shockley left Bell Labs – holding a grudge that his name hadn’t appeared on the patent applications – to start his own company, Shockley Semiconductor. This fell apart due to his suspicious, dictatorial managerial style, and defecting employees created spin-off rival companies such as Fairchild Semiconductor, Intel, and National Semiconductor; ushering in the new generation of Silicon Valley start-ups and the microelectronics revolution.

Fast-forward 63 years, and there are more transistors built per second than there are people on the planet; billions of these devices power our lives in the 21st century. As you may be aware, the most common form transistors take today is in integrated circuits (or simply ‘chips’). The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the previous manual assembly of circuits from discrete electronic components. The implications of this for the development of technology are huge – as the power of ICs increases and their size decreases, fewer materials are required to accommodate them.

This means our laptops get smaller, our systems become more streamlined, and the way we approach hardware and computing radically shifts.

Other advantages which distinguish the transistor from its predecessor, the vacuum tube, are:

It’s hard to believe that the transistor was only introduced to the world of technology in 1947. Those enjoying the many benefits of the device throughout the day – listening to the car radio on the way to work, checking emails on their iPhone, or jogging to tunes on an MP3 player – certainly wouldn’t think to give credit to John Bardeen, Walter Brattain, and William Shockley. But these three founding fathers definitely made a big impact – and as John said, ‘Science is a field which grows continuously with ever expanding frontiers.’

What do you think is the next frontier for technology?

— Written by guest writer Latesha Randall

Should you use C++ for an embedded project?

20 January 2011, by Ben    5 comments

Recently I was asked whether to use C or C++ for an embedded project. Let’s assume a sizeable project with an ARM7-based microcontroller and roughly 16KB of RAM. Here is my answer, and I hope it’s useful for you.

One good reason to go with C++ might be that you have a ready team of programmers who are already fluent in it. This would be a strong pull, but if they are not fluent in the embedded world particularly, then this article may help them choose a subset of C++ suitable for an embedded environment.

As Bjarne Stroustrup quipped, C makes it easy for you to shoot yourself in the foot, but C++, being much more complex and powerful, allows you to blow your whole leg off. This is especially true in embedded situations.

If you’ve only used C for embedded projects, you’ll be able to get started and into the project much more quickly if you stick with C, and other people could take over more easily. You won’t spend time learning the ins and outs of C++.

C has the advantage of being very direct and “what you see is what you get”. For example, in C this line of code adds two numbers together:

int x = y + z;

But in C++ a very similar-looking line:

MyType x = y + z;

This calls a constructor (which could do anything), calls the + operator of y’s class which (if overloaded) could do anything, possibly throws an exception, possibly allocates memory, etc. You just don’t know, unless your coding style guide lays down the law about what you can and can’t do (more on that soon).
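A contrived example (this MyType is purely hypothetical, invented for illustration) makes the hidden machinery visible by counting the calls:

```cpp
// One innocent-looking line triggers an overloaded operator and a
// constructor; either could allocate, throw, or do anything at all.
struct MyType {
  int value;
  static int ctor_calls, plus_calls;
  MyType(int v) : value(v) { ++ctor_calls; }          // constructor runs
  MyType operator+(const MyType &rhs) const {          // overloaded + runs
    ++plus_calls;
    return MyType(value + rhs.value);   // yet another constructor call
  }
};
int MyType::ctor_calls = 0;
int MyType::plus_calls = 0;
```

Constructing y and z and then writing MyType x = y + z; runs three constructors and one overloaded operator – none of which is visible at the call site, which is exactly the point.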

On the other hand, there are definitely some good features in C++’s favour that make programming better, especially for larger projects. Classes, namespaces, templates, scoping … but the language has a ton of powerful features as well as quirks, so when you’re embedding C++ you have to choose a subset of the language and stick with it.

For instance, for embedding C++ in an ARM7-sized processor, you might come up with a list something like:

If you haven’t used C++ before, you’ll definitely want to learn it before getting too far in. Following some links in my C++ blog articles may be helpful here:

That said, you can’t learn it just by reading about it, so you need to start coding.

In conclusion, for a small embedded job I’d pick C first for its simplicity, directness, and the fact more people know it. For a larger project, I’d pick C++ — it can make embedded life nicer, but only if you carefully choose a subset and have a good coding standard.

Further reading: