Award Finalist

Cool. We’re finalists for the second year in a row. Our satellite-based bee monitor, Hivemind, is an agri-tech innovation for beekeepers.

14 June 2016 by Berwyn    add a comment


New IoT Technologies

I love new technologies that might benefit people’s products. We’ve supported development on these three nifty platforms:

Other reasonably-priced technologies that we’re keeping our eye on are:


14 June 2016 by Berwyn    add a comment


IoT for business

Bee RFID

“If you can’t measure it, you can’t improve it,” goes Peter Drucker’s adage. IoT for business is not just cool tech: it’s measuring what you need to know, whether that’s agriculture, bio-sensing, or the supply chain. “Internet of Things” (IoT) is a fairly new buzzword, but for our company the field is nothing new: we’ve been working in it for 10 years.

Does your business need to measure asset location, bee hive health, customer location within your store, the health of your patient, or keep track of your factory process? You may even need us to make an app to present that data helpfully. Let us know.

In summary, measuring things is becoming a whole lot easier. Think through your business. The chances are high that you can benefit from IoT for business.

14 June 2016 by Berwyn    add a comment


Yes, my credit card number *does* have spaces!

Okay, so here’s a little rant (by a fellow developer) about web forms that accept credit card numbers. Or rather, web forms that don’t accept them very well.

Many card number input boxes are limited to 16 characters, meaning when I get up to “1234 5678 1234 5” and then try to type the last three digits … BANG, input box full, no more typie. I have to go back and delete the spaces, making it harder to read and check. I had this just today when paying for my Highrise account (though I’m not really picking on 37signals — it happens on many payment sites).

The other one that I get fairly often is when a site lets me enter the number with spaces, but then the server-side check comes back and says “invalid credit card number”. Um, no it’s not invalid — I’m typing it exactly as it appears on my card.

C’mon, folks, we’re programmers! Stripping out non-digits is the kind of thing computers can do in nano- or micro-seconds. What exactly is so hard about adding one line of code?

    card_number = ''.join(c for c in card_number if c.isdigit())

If you’re taking money that I want to give you, please at least let me enter my card number as it appears on my card — digit groups separated by spaces. Stripping really isn’t that hard! :-)

12 July 2013 by Ben    28 comments


Masterminds of Programming

Masterminds of Programming book cover

My wife gave me a real geek book for Christmas: Masterminds of Programming by two guys named Federico Biancuzzi and Shane Warden. In it they interview the creators of 17 well-known or historically important programming languages.

Overview

The book was a very good read, partly because not all the questions were about the languages themselves. The interviewers seemed very knowledgeable, and were able to spring-board from discussing the details of a language to talking about other software concepts that were important to its creator. Like software engineering practices, computer science education, software bloat, debugging, etc. The languages that everyone’s heard of and used are of course in there: C++, Java, C#, Python, Objective-C, Perl, and BASIC. There are a few missing — for example, the Japanese creator of Ruby didn’t feel comfortable being interviewed in English, and the publishers considered translation too expensive.

But what I really liked were interviews about some of the domain-specific languages, such as SQL, AWK, and PostScript. As well as some of the languages that were further off the beaten track, like APL, Haskell, ML, Eiffel, Lua, and Forth. The one thing I didn’t go for was the 60 pages with the UML folks. That could have been cut, or at least condensed — half of it (somewhat ironically) was them talking about how UML had gotten too big.

If you’re a programmer, definitely go and buy the book (the authors paid me 0x0000 to say that). But in the meantime, below are a few more specific notes and quotes from the individual interviews.

This review got rather long. From here on, it’s less of a real review, and more my “quotes and notes” on the individual chapters. I hope you’ll find it interesting, but for best results, click to go to the languages you’re interested in: C++, Python, APL, Forth, BASIC, AWK, Lua, Haskell, ML, SQL, Objective-C, Java, C#, UML, Perl, PostScript, Eiffel.

C++, Bjarne Stroustrup

C++ might be one of the least exciting languages on the planet, but the interview wasn’t too bad.

I knew RAII was big in C++, and Stroustrup plugged it two or three times in this fairly short interview. Another thing I found interesting was his comment that “C++ is not and was never meant to be just an object-oriented programming language … the idea was and is to support multiple programming styles”. Stroustrup’s very big on generic programming with templates, and he badgered Java and C# for adding generics so late in their respective games.

He does note that “the successes at community building around C++ have been too few and too limited, given the size of the community … why hasn’t there been a central repository for C++ libraries since 1986 or so?” A very good thought for budding language designers today. A PyPI or a CPAN for C++ would have been a very good idea.

As usual, though, he sees C++ a little too much as the solution for everything (for example, “I have never seen a program that could be written better in C than in C++”). I think in the long run this works against him.

Python, Guido van Rossum

One thing Python’s creator talks about is how folks are always asking to add new features to the language, but to avoid it becoming a huge hodge-podge, you’ve got to do an awful lot of pushing back. “Telling people you can already do that and here is how is a first line of defense,” he says, going on to describe stages two, three, and four before he considers a feature worth including in the core. In fact, this is something that came up many times in the book. To keep things sane and simple, you’ve got to stick to your vision, and say no a lot.

Relatedly, Guido notes, “If a user [rather than a Python developer] proposes a new feature, it is rarely a success, since without a thorough understanding of the implementation (and of language design and implementation in general) it is nearly impossible to properly propose a new feature. We like to ask users to explain their problems without having a specific solution in mind, and then the developers will propose solutions and discuss the merits of different alternatives with the users.”

After just reading Stroustrup’s fairly involved approach to testing, Guido’s approach seemed almost primitive — though much more in line with Python’s philosophy, I think: “When writing your basic pure algorithmic code, unit tests are usually great, but when writing code that is highly interactive or interfaces to legacy APIs, I often end up doing a lot of manual testing, assisted by command-line history in the shell or page-reload in the browser.” I know the feeling — when developing a web app, you usually don’t have the luxury of building full-fledged testing systems.

One piece of great advice is that early on when you only have a few users, fix things drastically as soon as you notice a problem. He relates an anecdote about Make: “Stuart Feldman, the original author of “Make” in Unix v7, was asked to change the dependence of the Makefile syntax on hard tab characters. His response was something along the lines that he agreed tab was a problem, but that it was too late to fix since there were already a dozen or so users.”

APL, Adin Falkoff

APL is almost certainly the strangest-looking real language you’ll come across. It uses lots of mathematical symbols instead of ASCII-based keywords, partly for conciseness, partly to make it more in line with maths usage. For example, here’s a one-liner implementation of the Game of Life in APL:

Conway's Game of Life in APL

Yes, Falkoff admits, it takes a while to get the hang of the notation. The weird thing is, this was in 1964, years before Unicode, and originally you had to program APL using a special keyboard.

Anyway, despite that, it’s a very interesting language in that it’s array-oriented. So when parallel computing and Single Instruction, Multiple Data came along, APL folks updated their compilers, and all existing APL programs were magically faster without any tweaking. Try that with C’s semantics.

Forth, Chuck Moore

Forth is a small and very nifty language that holds a special place in my heart. :-) It’s quirky and minimalistic, though, and so is its creator.

He’s an extremist, but also sometimes half right. For example, “Operating systems are dauntingly complex and totally unnecessary. It’s a brilliant thing Bill Gates has done in selling the world on the notion of operating systems. It’s probably the greatest con the world has ever seen.” And further on, “Compilers are probably the worst code ever written. They are written by someone who has never written a compiler before and will never do so again.”

Despite the extremism in the quotes above, there’s a lot folks could learn from Forth’s KISS approach, and a lot of good insight Moore has to share.

BASIC, Tom Kurtz

I felt the BASIC interview wasn’t the greatest. Sometimes it seemed Kurtz didn’t really know what he was talking about, for instance this paragraph, “I found Visual Basic relatively easy to use. I doubt that anyone outside of Microsoft would define VB as an object-oriented language. As a matter of fact, True BASIC is just as much object-oriented as VB, perhaps more so. True BASIC included modules, which are collections of subroutines and data; they provide the single most important feature of OOP, namely data encapsulation.” Modules are great, but OO? What about instantiation?

Some of his anecdotes about the constraints implementing the original Dartmouth BASIC were interesting, though: “The language was deliberately made simple for the first go-round so that a single-pass parsing was possible. In other words, variable names are very limited. A letter or a letter followed by a digit, and array names, one- and two-dimensional arrays were always single letters followed by a left parenthesis. The parsing was trivial. There was no table lookup and furthermore, what we did was to adopt a simple strategy that a single letter followed by a digit, gives you what, 26 times 11 variable names. We preallocated space, fixed space for the locations for the values of those variables, if and when they had values.”

AWK, Al Aho, Brian Kernighan, and Peter Weinberger

I guess AWK was popular a little before my (scripting) time, but it’s definitely a language with a neat little philosophy: make text processing simple and concise. The three creators are honest about some of the design trade-offs they made early on that might not have been the best. For example, there was tension between keeping AWK a text processing language, and adding more and more general-purpose programming features.

Apparently Aho didn’t write the most readable code, saying, “Brian Kernighan once took a look at the pattern-matching module that I had written and his only addition to that module was putting a comment in ancient Italian: ‘abandon all hope, ye who enter here’. As a consequence … I was the one that always had to make the bug fixes to that module.”

Another interesting Aho quote, this time about hardware: “Software does become more useful as hardware improves, but it also becomes more complex — I don’t know which side is winning.”

This Kernighan comment on bloated software designs echoes what Chuck Moore said about OSs: “Modern operating systems certainly have this problem; it seems to take longer and longer for my machines to boot, even though, thanks to Moore’s Law, they are noticeably faster than the previous ones. All that software is slowing me down.”

I agree with Weinberger that text files are underrated: “Text files are a big win. It requires no special tools to look at them, and all those Unix commands are there to help. If that’s not enough, it’s easy to transform them and load them into some other program. They are a universal type of input to all sorts of software. Further, they are independent of CPU byte order.”

There’s a lot more these three say about computer science education (being educators themselves), programming vs mathematics, and the like. But you’ll have to read the book.

Lua, Roberto Ierosalimschy and Luiz Henrique de Figueiredo

Lua fascinates me: a modern, garbage-collected, dynamically typed scripting language that fits in about 200KB. Not to mention the minimalist design, with “tables” being Lua’s only container data type. Oh, and the interview was very good too. :-)

As an embedded programmer, I was fascinated by a comment of Roberto’s — he mentions Lua’s use of C doubles as the single numeric type in Lua, but “even using double is not a reasonable choice for embedded systems, so we can compile the interpreter with an alternative numerical type, such as long.”

Speaking of concurrency, he notes that in the HOPL paper about the evolution of Lua they wrote, “We still think that no one can write correct programs in a language where a=a+1 is not deterministic.” I’ve been bitten by multi-threading woes several times, and that’s a great way to put it!
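To make that concrete, here’s a minimal C sketch (mine, not from the book or from Lua) of the hazard they mean: two threads both run a = a + 1 on a shared variable with no locking, and increments get lost depending on how the interleaving falls.

#include <pthread.h>
#include <stdio.h>

static int a = 0;  /* shared and unsynchronized */

static void *incr(void *arg)
{
  int i;
  for (i = 0; i < 1000000; i++)
    a = a + 1;  /* read, add, write back: three steps, not one atomic step */
  return NULL;
}

int main(void)
{
  pthread_t t1, t2;
  pthread_create(&t1, NULL, incr, NULL);
  pthread_create(&t2, NULL, incr, NULL);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  printf("a = %d\n", a);  /* expect 2000000; it usually prints less, and a different "less" each run */
  return 0;
}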

They note they “made many small [mistakes] along the way.” But in contrast to Make’s hard tab issue, “we had the chance to correct them as Lua evolved. Of course this annoyed some users, because of the incompatibilities between versions, but now Lua is quite stable.”

Roberto also had a fairly extreme, but very thought provoking quote on comments: “I usually consider that if something needs comments, it is not well written. For me, a comment is almost a note like ‘I should try to rewrite this code later.’ I think clear code is much more readable than commented code.”

I really like these guys’ “keep it as simple as possible, but no simpler” philosophy. Most languages (C++, Java, C#, Python) just keep on adding features and features. But to Lua they’ve now “added if not all, most of the features we wanted.” Reminds me of Knuth’s TeX only allowing bugfixes now, and its version number converging to pi — there’s a point at which the feature set just needs to be frozen.

Haskell, Simon Peyton Jones, John Hughes, and Paul Hudak

I’ve heard a lot about Haskell, of course, but mainly I’ve thought, “this is trendy, it must be a waste of time.” And maybe there’s truth to that, but this interview really made me want to learn it. John’s comments about file I/O made me wonder how one does I/O in a purely functional language…

It’s a fascinating language, and if Wikipedia is anything to go by, it’s influenced a boatload of other languages and language features. List comprehensions (and their lazy equivalent, generator expressions), which are one of my favourite features of Python, were not exactly invented by Haskell, but were certainly popularized by it.

There’s a lot more in this interview (on formalism in language specification, education, etc), but again, I’m afraid you’ll have to read the book.

ML, Robin Milner

Sorry ML, I know you came first in history, but Haskell came before you in this book, so you were much less interesting. Seriously though, although ML looks like an interesting language, this chapter didn’t grab me too much. There was a lot of discussion on formalism, models, and theoretical stuff (which aren’t really my cup of tea).

What was interesting (and maybe this is why all the formalism) is that ML was designed “for theorem proving. It turned out that theorem proving was such a demanding sort of task that [ML] became a general-purpose language.”

SQL, Don Chamberlin

One often forgets how old SQL is: almost 40 years now. But still incredibly useful, and — despite the NoSQL people — used by most large-scale websites as well as much desktop and enterprise software. So the interview’s discussion of the history of SQL was a good read.

One of the interesting things was they wanted SQL to be used by users, not just developers. “Computer, query this and that table for X, Y, and Z please.” That didn’t quite work out, of course, and SQL is really only used by developers (and with ORMs and suchlike, much of that is not hand-coded). But it was a laudable goal.

The other interesting point was the reasons they wanted SQL to be declarative, rather than procedural. One of the main reasons was optimizability: “If the user tells the system in detailed steps what algorithm to use to process a query, then the optimizer has no flexibility to make changes, like choosing an alternative access path or choosing a better join order. A declarative language is much more optimizer-friendly than a lower-level procedural language.” Many of the other database query languages of the day were more procedural, and nobody’s heard of them today.

Objective-C, Tom Love and Brad Cox

When I started writing Oyster.com’s iPad app, I of course had to learn Objective-C. At first (like most developers) I was put off by [all [the [square brackets]]] and the longNamesThatTryToDocumentThemselves. But after you get into it, you realize that’s just syntax and style, and the core of Objective-C is actually quite elegant — adding Smalltalk-style OO to C in a low-impact way.

It’s been popularized by Apple for Mac and iOS development, of course, and also been expanded heavily by them, but it really hasn’t strayed from its roots. As Tom Love said, it’s “still Objective-C through and through. It stays alive.”

Tom gives some reasoning behind the ugly syntax: “The square brackets are an indication of a message sent in Objective-C. The original idea was that once you built up a set of libraries of classes, then you’re going to spend most of your time actually operating inside the square brackets … It was a deliberate decision to design a language that essentially had two levels — once you had built up enough capability, you could operate at the higher level … Had we chosen a very C-like syntax, I’m not sure anybody would know the name of the language anymore and it wouldn’t likely still be in use anywhere.”

Tom Love has gone on to be involved with some huge systems and codebases (millions of lines of code), and shares some experience and war stories about those. One of the more off-the-wall ideas he mentions to help would-be project managers get experience is to have a “project simulator” (like a flight simulator): “There is a problem of being able to live long enough to do 100 projects, but if you could simulate some of the decisions and experiences so that you could build your resume based on simulated projects as contrasted to real projects, that would also be another way to solve the problem.”

When asked, “Why emulate Smalltalk?”, Brad Cox says that “it hit me as an epiphany over all of 15 minutes. Like a load of bricks. What had annoyed me so much about trying to build large projects in C was no encapsulation anywhere…”

Comparing Objective-C to C++, he pulls out an integrated circuit metaphor, “Bjarne [C++] was targeting an ambitious language: a complex software fabrication line with an emphasis on gate-level fabrication. I was targeting something much simpler: a software soldering iron capable of assembling software ICs fabricated in plain C.”

One extreme idea (to me) that Brad mentions is in his discussion of why Objective-C forbids multiple inheritance. “The historical reason is that Objective-C was a direct descendant of Smalltalk, which doesn’t support multiple inheritance, either. If I revisited that decision today, I might even go so far as to remove single inheritance as well. Inheritance just isn’t all that important. Encapsulation is OOP’s lasting contribution.”

Unfortunately for me, the rest of the interview was fairly boring, as Brad is interested in all the things I’m not — putting together large business systems with SOA, JBI, SCA, and other TLAs. I’m sure there are real problems those things are trying to solve, but the higher and higher levels of abstraction just put me to sleep.

Java, James Gosling

Like the C++ interview, the Java interview was a lot more interesting than the language is.

I know that in theory JIT compilers can do a better job than more static compilers: “When HotSpot runs, it knows exactly what chipset you’re running on. It knows exactly how the cache works. It knows exactly how the memory hierarchy works. It knows exactly how all the pipeline interlocks work in the CPU … It optimizes for precisely what machine you’re running on. Then the other half of it is that it actually sees the application as it’s running. It’s able to have statistics that know which things are important. It’s able to inline things that a C compiler could never do.” Those are cool concepts, but I was left wondering: how well do they actually work in practice? For what cases does well-written Java actually run faster than well-written C? (One might choose Java for many other reasons than performance, of course.)

James obviously has a few hard feelings towards C#: “C# basically took everything, although they oddly decided to take away the security and reliability stuff by adding all these sort of unsafe pointers, which strikes me as grotesquely stupid.”

And, interestingly, he has almost opposite views on documentation to Roberto Ierosalimschy from Lua: “The more, the better.” That’s a bit of a stretch — small is beautiful, and there’s a reason people like the conciseness of K&R.

C#, Anders Hejlsberg

Gosling may be right that C# is very similar to (and something of a copy of) Java, but it’s also a much cleaner language in many ways. Different enough to be a separate language? I don’t know, but now C# and Java have diverged enough to consider them quite separately. Besides, all languages are influenced by existing languages to a lesser or greater extent, so why fuss?

In any case, Anders was the guy behind the Turbo Pascal compiler, which was a really fast IDE and compiler back in the 1980’s. That alone makes him worth listening to, in my opinion.

What he said about the design of LINQ with regard to C# language features was thought-provoking: “If you break down the work we did with LINQ, it’s actually about six or seven language features like extension methods and lambdas and type inference and so forth. You can then put them together and create a new kind of API. In particular, you can create these query engines implemented as APIs if you will, but the language features themselves are quite useful for all sorts of other things. People are using extension methods for all sorts of other interesting stuff. Local variable type inference is a very nice feature to have, and so forth.”

Perhaps surprisingly (even with Visual Studio at his fingertips), Anders’ approach to debugging is much like Guido van Rossum’s: “My primary debugging tool is Console.WriteLine. To be honest I think that’s true of a lot of programmers. For the more complicated cases, I’ll use a debugger … But quite often you can quickly get to the bottom of it just with some simple little probes.”

UML, Ivar Jacobson, Grady Booch, and James Rumbaugh

As I mentioned, the UML interview was too big, but a good portion of it was the creators talking about how UML itself had grown too big. Not just one or two of them — all three of them said this. :-)

I’m still not quite sure what exactly UML is: a visual programming language, a specified way of diagramming different aspects of a system, or something else? This book is about programming languages, after all — so how do you write a “Hello, World” program in UML? Ah, like this, that makes me very enthusiastic…

Seriously, though, I think their critique of UML as something that had been taken over by design-by-committee made a lot of sense. A couple of them referred to something they called “Essential UML”, which is the 20% of UML that’s actually useful for developers.

Ivar notes how trendy buzzwords can make old ideas seem revolutionary: “The ‘agile’ movement has reminded us that people matter first and foremost when developing software. This is not really new … in bringing these things back to focus, much is lost or obscured by new terms for old things, creating the illusion of something completely new.” In the same vein, he says that “the software industry is the most fashion-conscious industry I know of”. Too true.

Grady Booch gave some good advice about reading code: “A question I often ask academics is, ‘How many of you have reading courses in software?’ I’ve had two people that have said yes. If you’re an English Lit major, you read the works of the masters. If you want to be an architect in the civil space, then you look at Vitruvius and Frank Lloyd Wright … We don’t do this in software. We don’t look at the works of the masters.” This actually made me go looking at the Lua source code, which is very tidy and wonderfully cross-referenced — really a good project to learn from.

Perl, Larry Wall

I’ve never liked the look of Perl, but Wall’s approach to language design is as fascinating as he is. He originally studied linguistics in order to be a missionary with Wycliffe Bible Translators and translate the Bible into unwritten languages, but for health reasons had to pull out of that. Instead, he used his linguistics background to shape his programming language. Some of the “fundamental principles of human language” that have “had a profound influence on the design of Perl over the years” are:

There are many others he lists, but those are some that piqued my interest. Larry’s a big fan of the human element in computer languages, noting that “many language designers tend to assume that computer programming is an activity more akin to an axiomatic mathematical proof than to a best-effort attempt at cross-cultural communication.”

He discusses at length some of the warts in previous versions of Perl, that they’re trying to remedy with Perl version 6. One of the interesting ones was the (not so) regular expression syntax: “When Unix culture first invented their regular-expression syntax, there were just a very few metacharacters, so they were easy to remember. As people added more and more features to their pattern matches, they either used up more ASCII symbols as metacharacters, or they used longer sequences that had previously been illegal, in order to preserve backward compatibility. Not surprisingly, the result was a mess … In Perl 6, as we were refactoring the syntax of pattern matching we realized that the majority of the ASCII symbols were already metacharacters anyway, so we reserved all of the nonalphanumerics as metacharacters to simplify the cognitive load on the programmer. There’s no longer a list of metacharacters, and the syntax is much, much cleaner.”

Larry’s not a pushy fellow. His subtle humour and humility are evident throughout the interview. For example, “It has been a rare privilege in the Perl space to actually have a successful experiment called Perl 5 that would allow us to try a different experiment that is called Perl 6.” And on management, “I figured out in the early stage of Perl 5 that I needed to learn to delegate. The big problem with that, alas, is that I haven’t a management bone in my body. I don’t know how to delegate, so I even delegated the delegating, which seems to have worked out quite well.”

PostScript, Charles Geschke and John Warnock

My experience with Forth makes me very interested in PostScript, even though it’s a domain-specific printer control language, and wasn’t directly inspired by Forth. It’s stack-based and RPN, like Forth, but it’s also dynamically typed, has more powerful built-in data structures than Forth, and is garbage collected.

One thing I wasn’t fully aware of is how closely related PostScript is to PDF. PDF is basically a “static data structure” version of PostScript — all the Turing-complete stuff like control flow and logic is removed, but the fonts, layout and measurements are done exactly the same way as PostScript.

Some of the constraints they relate about implementing PostScript back in the early days are fascinating. The original LaserWriter had the “largest amount of software ever codified in a ROM” — half a megabyte. “Basically we put in the mechanism to allow us to patch around bugs, because if you had tens of thousands or hundreds of thousands of printers out there, you couldn’t afford to send out a new set of ROMs every month.” One of the methods they used for the patching was PostScript’s late binding, and its ability to redefine any operators, even things like the “add” instruction.

John Warnock mentions that “the little-known fact that Adobe has never communicated to anybody is that every one of our applications has fundamental interfaces into JavaScript. You can script InDesign. You can script Photoshop. You can script Illustrator with JavaScript. I write JavaScript programs to drive Photoshop all the time. As I say, it’s a very little-known fact, but the scripting interfaces are very complete. They give you real access, in the case of InDesign, into the object model if anybody ever wants to go there.” I wonder why they don’t advertise this scriptability more, or is he kind of being humble here?

I didn’t realize till half way through that the interviewees (creators of PostScript) were the co-founders of Adobe and are still the co-chairmen of the company. The fact that their ability extends both to technical and business … well, I guess there are quite a few software company CEOs that started as programmers, but I admire that.

Weird trivia: In May 1992, Charles Geschke was approached by two men who kidnapped him at gunpoint. The long (and fascinating) story was told five years later when the Geschkes were ready to talk about it. Read it in four parts here: part one, part two, part three, and part four.

Eiffel, Bertrand Meyer

Eiffel, you’re last and not least. Eiffel is quite different from Java or C#, though it influenced features in those languages. It incorporates several features that most developers (read: I) hadn’t heard of.

Design by Contract™, which Microsoft calls Code Contracts, is a big part of Eiffel. I’m sure I’m oversimplifying, but to me they look like a cross between regular asserts and unit tests, but included at the language level (with all the benefits that brings). Bertrand Meyer can’t understand how people can live without it: “I just do not see how anyone can write two lines of code without this. Asking why one uses Design by Contract is like asking people to justify Arabic numerals. It’s those using Roman numerals for multiplication who should justify themselves.”
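As a rough illustration (my sketch in plain C with asserts, not real Eiffel, where contracts are declared in the class and checked by the runtime), here’s the flavour of a precondition and postcondition:

#include <assert.h>

/* Integer square root with contract-style checks. */
int isqrt(int x)
{
  assert(x >= 0);                  /* precondition -- "require" in Eiffel */
  int r = 0;
  while ((r + 1) * (r + 1) <= x)
    r++;
  assert(r * r <= x);              /* postcondition -- "ensure" in Eiffel */
  assert(x < (r + 1) * (r + 1));
  return r;
}

Eiffel goes further, of course: contracts are inherited, documented in the class interface, and can be selectively enabled, which asserts alone don’t give you.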

He does have a few other slightly extreme ideas, though. Here’s his thoughts on C: “C is a reasonably good language for compilers to generate, but the idea that human beings should program in it is completely absurd.” Hmmm … I suspect I’d have a hard time using Eiffel on an MSP430 micro with 128 bytes of RAM.

It appears that Eiffel has a whole ecosystem, a neat-looking IDE, company, and way of life built around it. One of the few languages I know of that’s also a successful company in its own right. It’d be like if ActiveState was called “PythonSoft” and run by Guido van Rossum.

The end

That’s all folks. I know this falls in the category of “sorry about the long letter, I didn’t have time to write a short one”. If you’ve gotten this far, congratulations! Please send me an email and I’ll let you join my fan club as member 001.

Seriously though, if you have any comments, the box is right there below.

21 January 2013 by Ben    13 comments


C#’s async/await compared to protothreads in C++

For different parts of my job, I get to work in both high-level and very low-level software development — developing a Windows 8 app in C# on the one hand, and writing embedded C++ code for a microcontroller with 4KB of RAM on the other.

In our embedded codebase we’ve been using our C++ version of Adam Dunkels’ protothreads, and recently I noticed how similar protothreads are to C#’s new await and async keywords. Both make asynchronous code look like “normal” imperative code with a linear flow, and both unroll to an actual state machine under the covers.

There’s a great answer on StackOverflow showing how the C# compiler does this (and you can read more in-depth here). The example on StackOverflow shows that this C# code:

async Task Demo() { 
  var v1 = foo();
  var v2 = await bar();
  more(v1, v2);
}

Is compiled down to something like this:

class _Demo {
  int _v1, _v2;
  int _state = 0;
  Task<int> _await1;
  public void Step() {
    switch(this._state) {
    case 0:
      this._v1 = foo();
      this._await1 = bar();
      // When the async operation completes, it will call this method
      this._state = 1;
      this._await1.SetContinuation(Step);  // pseudo-API: register Step as the continuation
      return;
    case 1:
      this._v2 = this._await1.Result;  // get the result of the operation
      more(this._v1, this._v2);
      return;
    }
  }
}

C++ protothreads unroll to state machines in a similar way. For a (slightly more involved) example, see the protothread vs the state machine examples at my original blog entry.

C#’s async/await and protothreads are especially similar when using protothreads in C++, as both convert local variables to member variables so that they’re around next time the protothread is executed. In C#, of course, await is available at the language level, and as a result this is done automagically by the compiler. In C++, PT_WAIT is a macro whose implementation even Duff himself probably wouldn’t care for. And of course, C++ protothreads don’t use continuations.
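For illustration, here’s roughly what a protothread looks like using Adam Dunkels’ C macros (PT_THREAD, PT_BEGIN, PT_WAIT_UNTIL, and PT_END are the real protothreads API; led_toggle and timer_expired are hypothetical stand-ins):

#include "pt.h"   /* Adam Dunkels' protothreads */

void led_toggle(void);    /* hypothetical hardware call */
int timer_expired(void);  /* hypothetical condition */

static int count;  /* "locals" must live outside the function so they
                      survive across waits, like the lifted fields in C# */

static PT_THREAD(blinker(struct pt *pt))
{
  PT_BEGIN(pt);
  for (count = 0; count < 10; count++) {
    led_toggle();
    PT_WAIT_UNTIL(pt, timer_expired());  /* yields here until the condition is true */
  }
  PT_END(pt);
}

/* The caller owns the state and resumes the thread each main-loop iteration:
   static struct pt state;  PT_INIT(&state);  ...  blinker(&state);  */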

But they both work very well for their intended use cases! In any case, I thought this similarity was pretty neat — to me C#’s approach with async/await validates the protothread concept, despite the latter being implemented at a much lower level and with hairy macros.

So if you’re doing low-level embedded development, do check out protothreads — either the straight C version or the C++ equivalent.

12 November 2012 by Ben    5 comments


Save a Gmail message to hard disk

Often you want to save a formatted Gmail message to your hard disk.  Maybe it’s a license key or some design documentation that you want to save in version control.  You can print the whole conversation as a pdf, but not just one message.  And .html would be better than pdf.  Well, here’s how.

As you know, you can already save a plain-text message by selecting “show original” from the drop-down menu in the top-right of the email.  But it can look really ugly if it is an html message displayed in plain-text format.

And here’s the secret.  Once you get your “show original” window up, your address bar will contain something like this: https://mail.google.com/mail/u/0/?ui=2&ik=1234567890&view=om&th=1234abcd1234abcd

Just change the bit that says “view=om&th” to “view=lg&msg”.  Push <Enter>, and Bingo: it’s no longer plain text.  Now save it as .html or as pdf.

Alternatively, below is a bookmarklet that achieves the same thing. To use it, just drag the link below to your bookmarks bar. Then when you want to see the message as HTML, view the original, then click your “Gmail Original as HTML” bookmark button:

Gmail Original as HTML

Sometimes it’s the little things …

5 September 2012 by Berwyn    5 comments


µA/MHz is only the beginning

Picking a low-power microcontroller is way more than just selecting the lowest-current device.  Low µA/MHz may be the least of your power worries in a mobile device.  Every man and his dog is now selling microcontrollers containing an ARM core.  And many are very much as good as ARM can make them, at around 150 µA/MHz.  But the packaging around that core makes all the difference.  Let’s look inside an Energy Micro EFM32 microcontroller to see what it takes for their EFM32TG110 to be ahead of the pack.  The keys are sleep mode and sensor power.

On a modern design you may have to sense a capacitive-touch keypad, sample ambient light level, listen to a serial port, monitor wireless communications and a swipe card.  If you have to wake up your CPU 1000 times a second to check all these sensors, your battery life is shot.  It’s the difference between a one-year non-rechargeable battery life and a 10-year life.  And that means the difference between having a user-replaceable battery or a never-replace battery – which comes down to a quality user experience.  And Steve Jobs has shown us that user experience is what sells products.

The answer is ultimate sleep and low energy peripherals.

There’s sleep and there’s SLEEP

To save power, any mobile device must spend most of its time in sleep mode – by long-established tradition.  So to start with, look for the current used in sleep mode. Our example EFM32 device has five different low-power modes, each enabling various functionality.  Its lowest-power nearly-dead mode draws only 20 nA (yes, really nano-amps), the real-time clock as low as 0.5 µA on some parts, and highly functional modes only 1 µA.

The key thing to look for in sleep mode is fast wake-up.  If waking from sleep takes more than a couple of microseconds, you’re wasting power that whole time.  Ideally, you want your software to sleep indefinitely on every single iteration of the main loop.  It should only be woken up by I/O.  So your I/O all needs to be interrupt driven, and your processor needs to keep time without waking up.  Only time or I/O events should wake up the processor: don’t wake up every millisecond.  This has software implications: check out lightweight Protothreads or the Contiki OS.
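A minimal sketch of that shape on a Cortex-M part like the EFM32, using the CMSIS __WFI() “wait for interrupt” intrinsic (the handler and flag names here are made up):

#include <stdbool.h>
#include "core_cm3.h"  /* CMSIS core header; provides the __WFI() intrinsic */

void handle_uart(void);            /* hypothetical */

static volatile bool uart_event;   /* set only from the interrupt handler */

void UART0_RX_IRQHandler(void)     /* hypothetical vector name for your part */
{
  uart_event = true;               /* a real handler would also clear the interrupt source */
}

int main(void)
{
  for (;;) {
    __WFI();                       /* sleep until *some* interrupt fires */
    if (uart_event) {              /* woken by I/O, never by polling a timer */
      uart_event = false;
      handle_uart();
    }
  }
}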

The final key: low energy peripherals

Sleep mode is good, but by itself it is not enough of a power saver.  10-year single-cell battery life, remember?  The real “killer feature” is low-energy peripherals.  Our EFM32 example boasts low-energy peripherals (LESENSE) that can run in the background while your processor stays in sleep mode.  While in sleep mode it can sense a capacitive slider or buttons, send UART serial data to/from memory, check light sensors, voltage levels, scale load cells, and so on.  This all happens while the processor is in sleep mode drawing just 1 µA.  No need to wake up to a hungry 150 µA/MHz.

There’s one more thing to look for: why not power your sensors only briefly, turning them off in between measurements?  For example, a capacitive touch button only needs to sense, say, 20 times a second.  And measuring a weight threshold on a load cell would take an op-amp and comparator – but these only need to be powered up briefly several times a second.  Similarly, a voltage divider only needs to be connected up during comparator measurement.

The EFM32 microcontroller achieves this by having an array of analog comparators and op-amps that are powered up for a few microseconds every time the sensor is sampled.  It’s only when a finger is sensed on a button or a voltage level drops below a certain point that the microcontroller needs to be woken up to decide what to do with the event.

We’ve been using the EFM32 as an example.  There may be others “out there” that do the same kind of thing.  But we haven’t found any yet.  In our minds, this places Energy Micro in a league of its own.  If you know of similar low-energy sensor technology in ARM microcontrollers, we’d love to hear your comments.


Also see Low-Energy Mobile, the November issue of our newsletter, or our other technical articles.

1 November 2011 by Berwyn    add a comment


Single-port USB3 docking

This made-for-mobile docking is a royalty-free idea for any laptop manufacturer to take up.  All I ask is that you give me one  :-).  Docking stations are bulky.  Why not make the laptop power supply a single USB connector into the laptop?  It can supply power over this (special) USB connector’s power wires, and the power supply can also be a USB3 hub for the room’s peripherals: second monitor, printer, etc.  When you’re in the office, just plug in power and you’re docked.  Plus, when you’re out of the office, the same laptop port doubles as a spare USB3 port.

1 November 2011 by Berwyn    add a comment


Endian solution for C

Big or little end?

Are numbers stored big-end or little-end first?  C programmers have to byte-swap manually to deal with this.  Portable code becomes icky.  The full ickiness is illustrated in Intel’s excellent Endianness White Paper.

But in C there is no reason the compiler can’t do the hard work and make programs both portable and pretty.  Here we present a quick hack, and also a solution requiring a minor change to the C compiler.  First, the quick hack.

Quick Hack

Suppose we’re dealing with a USB protocol setup packet:

struct setup {
  char  bmRequestType,  bRequest;
  short  wValue,  wIndex,  wLength;
};

Since USB is little-endian, the short integers will work on an x86 machine, but if you have the job of porting to a big-endian ARM, you’ll need to byte-swap each of these values every time they’re accessed.  This could be a lot of code rework.

One quick-n-dirty way to accomplish this is to simply store the USB packet in reverse order as it comes in your door (or write a reversal function).  Then define your struct in reverse order:

struct setup {
  short  wLength,  wIndex,  wValue;
  char  bRequest,  bmRequestType;
};

Note that this will only work if you don’t have strings in your struct, as C’s string library functions don’t expect strings to be in reverse order!
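The reversal function mentioned above might look something like this (my sketch):

/* Reverse a buffer in place, e.g. before overlaying the reversed struct on it. */
static void reverse_bytes(unsigned char *buf, int len)
{
  int i, j;
  for (i = 0, j = len - 1; i < j; i++, j--) {
    unsigned char tmp = buf[i];
    buf[i] = buf[j];
    buf[j] = tmp;
  }
}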

A Real Solution

The following solution has the C compiler dealing with the whole endian issue for you – making your program totally portable with zero effort.  It would require the addition of a single type modifier to the C compiler (C compilers already have various type modifiers).  To solve endian portability, this addition would be well worth it.

This concept is similar to the ‘signed’ and ‘unsigned’ access type modifiers or the GNU C compiler’s packed attribute which helps to access a protocol’s data by letting you prevent padding between structure elements.

Our example above would become:

struct setup {
  char  bmRequestType,  bRequest;
  little_endian  short  wValue, wIndex, wLength;
};

All that is needed is the ability to specify the endian nature in a type modifier: little_endian or big_endian.  In the above, an x86 compiler would know to ignore the modifier since x86 is already little-endian.  But a big-endian ARM compiler would know to byte-swap for you upon reading or writing.

The same would work for pointers:

little_endian  long  *x,  *y;

Whenever x or y is accessed, the bytes are swapped by the compiler.  You can even cast a standard long pointer to a little_endian long to force a compiler byte-swap upon access.

Internally, the compiler would probably implement just a single byte-swap type modifier which it would apply to all non-native accesses.  But for portability and clarity, this should be spelled out as little_ or big_endian in the source.
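Until compilers grow such a modifier, the nearest portable equivalent is to route all non-native accesses through small inline helpers that hand-write the swap the compiler would otherwise insert.  A sketch for 16-bit values:

#include <stdint.h>

/* Read/write a 16-bit little-endian value at p, whatever the host's endianness. */
static inline uint16_t le16_read(const unsigned char *p)
{
  return (uint16_t)(p[0] | (p[1] << 8));
}

static inline void le16_write(unsigned char *p, uint16_t v)
{
  p[0] = (unsigned char)(v & 0xff);
  p[1] = (unsigned char)(v >> 8);
}

This works on both endiannesses without #ifdefs, and a decent compiler optimizes the shifts away on a little-endian host.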

It should also be noted that this same solution solves the endian problem for bit fields and bit masks (White Paper p12).

We are not C compiler gurus, so I’m not going to risk adding this change to my compiler.  But I put it “out there” for comment and, I hope, uptake.  I’m guessing this should go into GCC first, and then others will gradually follow suit.

1 November 2011 by Berwyn    2 comments