Template vs Text::Handlebars

A previous attempt at supporting Handlebars
In the comments of Where To Generate the HTML?, Tom Molesworth points out that, if you have a template that must render on both the client and the server, maintaining two templates instead of one means more maintenance work.

This is most certainly true. In that case, it was just enough code to demonstrate a problem, not something I would expect to maintain. In fact, if users who weren't me were regularly finding that code, I'd be more likely to delete it than support it. But the broader point stands: there are things I might create on the server side, then regenerate on the client side, and as I do more and more web work, I become more and more likely to want to do that.

Below I show the creation of the table and the creation of output in both Template Toolkit and Text::Handlebars. You will notice that you have a handle for each element in TT ( FOREACH row IN table being akin to for my $row ( @table ) ), while with Handlebars, you get this. The joke about JavaScript is that you want to throw up your hands and scream "this is B.S.!" but you're not sure what this is.
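For reference, this is roughly the shape of the two templates for the same array-of-arrays; the markup here is my own illustration, not the exact code from the post:

```
[%# Template Toolkit: every loop variable gets its own name %]
<table>
[% FOREACH row IN table %]
  <tr>
  [% FOREACH cell IN row %]<td>[% cell %]</td>[% END %]
  </tr>
[% END %]
</table>

{{! Handlebars: inside #each, the current element is just "this" }}
<table>
{{#each table}}
  <tr>
  {{#each this}}<td>{{this}}</td>{{/each}}
  </tr>
{{/each}}
</table>
```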

On the other hand, I've always been proud of making good, well-formatted HTML, and I do not like the formatting of the table that TT created. I've tried to make TT's output look the way I want, and generally I just give up and make sure it has everything my users, my code and I need from it instead. The Handlebars-generated HTML is exactly how I'd want my code to look.

Plus, Handlebars is cross-platform. Then again, I understand there's Jemplate to handle TT in JavaScript, so strictly speaking, both are.

So, there's the code and the rendered output. What do you think? What's your preference?


We Have The Facts And We're Voting Arduino.cc

You've seen my posts on Arduino before. It's a microcontroller, released with an Open Source license, that finally allowed me to transition from programming bits to programming atoms. OK, I need to gear up and get a servomotor or something, but the point remains.

For years, the place to go for Arduino downloads and documentation was arduino.cc. Things are now changing...

Arduino LLC is the company founded by [Massimo Banzi], [David Cuartielles], [David Mellis], [Tom Igoe] and [Gianluca Martino] in 2009 and is the owner of the Arduino trademark and gave us the designs, software, and community support that’s gotten the Arduino where it is. The boards were manufactured by a spinoff company, Smart Projects Srl, founded by the same [Gianluca Martino]. So far, so good.
Things got ugly in November when [Martino] and new CEO [Federico Musto] renamed Smart Projects to Arduino Srl and registered arduino.org (which is arguably a better domain name than the old arduino.cc). Whether or not this is a trademark infringement is waiting to be heard in the Massachusetts District Court.
According to this Italian Wired article, the cause of the split is that [Banzi] and the other three wanted to internationalize the brand and license production to other firms freely, while [Martino] and [Musto] at the company formerly known as Smart Projects want to list on the stock market and keep all production strictly in the Italian factory.
(quoted from Hackaday)

I'll repeat a line. Whether or not this is a trademark infringement is waiting to be heard in the Massachusetts District Court. It is a matter of law, not up to me. As the boards are Open Source, you are well within your rights to make your own, as we at Lafayettech Labs have considered. (We decided that the availability of inexpensive small boards from SparkFun and Adafruit makes our board redundant.) I have a few boards of non-arduino.cc origin, but I have, and am glad to have, some that are arduino.cc boards, supporting the project that's inspired me. (And I'm reasonably sure that those boards were actually manufactured by Smart Projects Srl.)

You are also within your rights to fork a project and make changes, which is how Arduino code got ported to TI boards with Energia, and which is what Arduino.org did.

Using the same name.

GitHub user probonopd reported an issue with the Arduino fork:
Rename this fork and use less confusing versioning
This fork of the Arduino IDE should be renamed something else to avoid confusion, and a version number should be chosen that does not suggest that this fork is "ahead of" the original Arduino IDE when in fact it is not. Naming open source projects should follow community best practices.
As for "arduino-org", one can read on https://github.com/arduino-org that "This organization has no public members. You must be a member to see who's a part of this organization". This is quite telling. The real Arduino team is at https://github.com/orgs/arduino/people as everyone can clearly see.
There is a difference between trying to fork a project and trying to hijack a project, and I think hijacking is clearly what Arduino.org is trying to do. I urge everyone interested in Open Source to push them to rename and re-version their fork, by making this issue known in their communities and by firmly but politely agreeing with probonopd, like the over 300 users who already have.

Rename the fork!

Subroutine Signatures and Multimethods in Perl, Oh My!

This is a question I asked about on Google+ a while ago: how do you have multiple subroutines responding to the same name, distinguished only by their signatures? It isn't yet part of the experimental signatures feature that came in with 5.20 (to my understanding).
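To be clear about what the 5.20 feature does give you: named parameters and defaults, but only one body per name. A minimal sketch (the sub and its arguments are made up; requires Perl 5.20 or later):

```perl
use strict;
use warnings;
use feature 'signatures';
no warnings 'experimental::signatures';    # experimental as of 5.20

# One name, one body: the signature gives you named parameters
# and a default, but no dispatch. greet() is a made-up example.
sub greet ( $name, $greeting = 'Hello' ) {
    return "$greeting, $name!";
}

# A second "sub greet" with a different signature wouldn't coexist
# with this one; it would just redefine it. Dispatch-by-signature
# is what multimethods add on top.
print greet('world'), "\n";              # Hello, world!
print greet( 'world', 'Howdy' ), "\n";   # Howdy, world!
```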

Turns out, there's a name for it. Multi-methods.

And it turns out there's a module on CPAN already: Class::Multimethods.

And, like so many very strange, very cool modules you don't know you need until you get there, it was written by Damian Conway.

The way I expect I'd handle it, I'd have each variant of test() munge its input into a common form, then have an internal subroutine handle the munged input. But then again, I'm not sure. My mind is still blown by this, so I don't really know how to use it.
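To make that concrete, here's a hand-rolled sketch of that shape (dispatch on ref(), munge, then one internal worker), with made-up names and stand-in behavior; Class::Multimethods would do the dispatch for you, on real class names:

```perl
use strict;
use warnings;

# A hand-rolled stand-in for multimethod dispatch: pick a variant
# by the argument's reference type, normalize, then hand off to a
# single internal worker. All names and behavior here are made up.
sub test {
    my ($input) = @_;
    my $type = ref $input;
    return _test_guts(@$input) if $type eq 'ARRAY';
    return _test_guts(%$input) if $type eq 'HASH';    # flattens to key/value list
    return _test_guts($input)  if $type eq '';        # plain scalar
    die "test(): no variant handles a $type reference";
}

# The one internal subroutine that handles the munged input
sub _test_guts {
    my @values = @_;
    return scalar @values;    # stand-in for the real work
}

print test( [ 1, 2, 3 ] ), "\n";           # 3
print test( { a => 1, b => 2 } ), "\n";    # 4
print test('lone scalar'), "\n";           # 1
```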

You'll excuse me; it took several years before I felt confident enough to take on the Schwartzian Transform, much less understand and use it, so I'm very happy to get this, even before knowing where or if I'd use it.

(I don't use Unicode in the examples, but I think more people should know about the Unicode features in newer Perls.)


Where to generate the HTML? A Test.

I was in a conversation at a developers' event yesterday, and the other guy said something that surprised me. Sometimes, he said, he generates HTML on the server side, sends it to the client via AJAX (or XMLHttpRequest, or XHR, or however you refer to it), and has the JavaScript insert it into the page in fairly cooked form.

This contrasts with how I work. I generate initial pages on the server-side, sure, but if I'm going to change or add HTML after load, I'm going to either use jQuery or Mustache to transform the data to HTML, rather than pull large, formatted HTML.

The conversation was not very productive because he seemed to become defensive, but I had no intention of attacking his favored methodology; I just wanted to understand it.

So, this morning, I wrote a test where I created a large array of arrays and passed it to web pages either as JSON or as HTML, using Perl's Template Toolkit to create the HTML on the server side and Mustache.js on the client side. Code follows:
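The code isn't someplace I'm happy linking to, but the shape of the test was roughly this; a core-modules-only sketch with made-up table sizes, using plain string-building where the real test used Template Toolkit:

```perl
use strict;
use warnings;
use JSON::PP qw(encode_json);    # core since Perl 5.14

# A big array-of-arrays, like the 30,000-cell table in the test
# (1,000 rows by 30 columns here; the sizes are made up)
my @table = map {
    my $row = $_;
    [ map { "r${row}c$_" } 1 .. 30 ]
} 1 .. 1000;

# Route 1: ship JSON, let Mustache.js build the table client-side
my $json = encode_json( \@table );

# Route 2: build the finished HTML server-side (Template Toolkit
# in the real test; plain string-building here to stay core-only)
my $html = "<table>\n"
    . join( '',
        map { '<tr>' . join( '', map {"<td>$_</td>"} @$_ ) . "</tr>\n" }
        @table )
    . "</table>\n";

printf "JSON payload: %d bytes\nHTML payload: %d bytes\n",
    length $json, length $html;
```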

The result?

On the left is the JSON > Mustache route. On the right is the Template Toolkit version. The numbers are slightly different, but in terms of how it felt as a user, they had similar delays from clicking "Click Me". I'm sure the big issue for the browser is rendering a 30,000-cell table, not downloading it from the API or generating it from a template.

(The code is not someplace I'm happy linking to right now, so I might move and re-test it and post the link.)

It strikes me that this is very much a matter of optimizing for the developer, not for the platform. I'm not wrong in wanting to do it with Mustache (although I understand Handlebars.js would allow a loop within a loop, like Template does, which would work better for me), and the other guy is not wrong for letting PHP generate it for him.

Yes, I'm avoiding language wars and saying he's not wrong for using PHP.

But, this is one test. Does anyone have data to the contrary?


Thinking through Git in my work

This is mostly me thinking through some issues.

I use git and GitHub a fair amount. Not enough, not deeply enough, but a fair amount.

I understand Use Case #1, Compiled App, like RLangSyntax:

  • code in ~/dev/Project
  • subdirs (test/, lib/, examples/, docs/, etc) and documentation, license and build scripts in to git
  • git push origin master
  • compile code to binary, deploy the binary elsewhere (like GitHub's releases tab)
When someone wants to use this code, git clone Project gets you all you need to build it, except for compilers and associated libraries, which should be listed in README.md. (I forgot to put how to build into the RLangSyntax docs, then forgot how to build RLangSyntax. Forgive me, Admin, for I have sinned.)

Let's jump to Use Case #2, Perl Library, like Spark.pm.
  • code in ~/dev/Project
  • no build instructions; it's built
  • tests in ~/dev/Project/t/spark.t and the like (which this doesn't have, to my shame)
  • git push origin master
This is where I get confused. Eventually, this code needs to be in /production/lib, but you don't want to deploy using git pull or git clone, because you don't want /production/lib/Project/. Or maybe you do and I just don't get it. Still, this is a case where I can do an acceptably wrong thing as required.

Use Case #3 is Perl command-line tools. We'll take on my twitter tool.
  • code in ~/dev/project
  • git push origin master
This raises about the same question as the Perl library. It could work to have ~production/bin/twitter.pl/twitter.pl, but then you have to expand your path to include every little thing. It gets more involved if you have libraries with executables in the repo, or the reverse, but let's get to the real hairball.

Use Case #4, the Hairball, is our web stuff.

  • ~web/ - document root
  • ~web/data - holds data that we want to make web-accessible
  • ~web/images - images for page headers and other universal things
  • ~web/lib - all our JavaScript
  • ~web/css - all our Cascading Style Sheets
  • ~web/cgi-bin - our server-side code
So, we might have the server-side code in ~web/cgi-bin/service/Project/foo.cgi, the client-side code in ~web/lib/Project/foo.js but maybe in ~web/lib/foo.js, the style in ~web/css/project.css and ~web/css/service.css, and of course the libraries in ~production/lib/.

Maybe the key is to think of ~web/lib and ~web/css as variations of Use Case #2, but the problem is that a lot of my JS code isn't general like the Perl code. I mean, wherever you want a sparkline, you can use Spark.pm, but the client code for ~web/cgi-bin/service/Project/foo.cgi is going to be mostly very specific to foo.cgi, except for jQuery, Mustache and the few other things in use across services and projects.

A possible solution is to have things in ~web/service, with the JSON APIs in ~web/cgi-bin/service/apis and the JavaScript in either ~web/service/lib or ~web/lib depending on how universal it is, but then we lose certain abilities. I certainly have written CGI code that puts out as little HTML as it needs to bootstrap the JS, which works for the small audience those tools need.

I mostly code foo.js in relation to foo.cgi or foo.html, but making tests and breaking it into pieces may keep me from having KLOC-size libraries I hate to work on in the future.

And here, we have departed from git into project and web architecture, and into best practices. Still, any suggestions would be gladly accepted.


Testing my API changes with Perl

I have an API. I like some things about it, but it was kinda clunky in many ways.

I rewrote it. I thought I had everything together, since it used the same SQL queries, but I wanted to be sure it kicked out the same output.

So, I wrote tests using Perl's Test::Most and cmp_deeply().
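A sketch of that kind of test, using core Test::More's is_deeply() instead of cmp_deeply() (which comes via Test::Most and Test::Deep); the API subs here are stand-ins for the real endpoints:

```perl
use strict;
use warnings;
use Test::More;    # core; Test::Most layers cmp_deeply() on top

# Stand-ins for the old and rewritten API calls. The names and the
# data are hypothetical; the real ones run the same SQL underneath.
sub old_api { return { count => 2, rows => [ [ 1, 'alpha' ], [ 2, 'beta' ] ] } }
sub new_api { return { count => 2, rows => [ [ 1, 'alpha' ], [ 2, 'beta' ] ] } }

# One deep comparison checks the whole nested structure at once
is_deeply( new_api(), old_api(), 'rewrite kicks out the same output' );

done_testing();
```

cmp_deeply() does the same job but also takes matchers like ignore() and bag() for when parts of the structure may legitimately differ.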

A few weeks ago, I knew Perl testing existed but I didn't know much about it. Now it's practical.


Diet, Exercise and Other Issues

Nerd Fitness posted "Why Exercise is the Least Important Part of the Equation", where Steve argues that getting your diet in line is the necessary first step to health and weight loss, in part because you're going to eat anyway, so eating something different is a small change.

Of course I understand and agree. I can show you my first year of documented weight loss, point out when I started doing Couch to 5K and when I ran/walked a 5K, and note that the close-to-straight line of 1 lb/week doesn't change.

Let's do that.

That red line is what's important. I started running at about day 170 and did the 5K at about day 250. Maybe later. I can look it up if it's that important, but the key is that there's no dip in that trend line where I started and stopped "running". It was good for me, I'm glad I did it and need to do more, but it had zero to do with weight loss.

So, what did affect that weight loss?

Dunno for sure.

I have theories, sure, but I can't prove anything. Here's what it wasn't: food intake. For years, I only ate dinner. I ate it late, and soon after, I crashed out from exhaustion. Somewhere about six years ago, I started making sure I had lunch, storing a bunch of microwave meals in the office fridge just to make sure. I now do something similar with breakfast, too, and both of those changes occurred several years before I started losing weight.

That's two of three daily meals that didn't change, and as I didn't try to force any diet changes on my family (and doubt it would've worked if I had tried), the third meal can't be counted as significantly different.

I say Diet Coke instead of diet cola because,
as the pic above shows, it truly was my drink,
from 1987 to 2011.
Honestly, I did two things that I credit for my weight loss:
  • I stopped drinking Diet Coke and started controlling caffeine intake
  • I started plotting my weight
The plot is shown above. The idea came from The Hacker's Diet, which I had heard of before but only really understood enough to try after reading (part of) Tim Ferriss' Four Hour Body. I had no real expectation of change, but I had a desire to familiarize myself with the statistical programming language R, and needed an excuse and a data set to do so. Once I had everything in place and started watching the numbers, I intentionally avoided making too many changes in the rest of my life, just to see how long the one-pound-a-week drop would last.

The significant change is when I stopped drinking Diet Coke, and because there's so much involved there, I'm not sure what the significant parts are.

The first thing it might relate to is insulin. The theory goes that, just as you're tricking your tongue with sweet-but-not-sugar artificial sweeteners, you might be tricking your pancreas: all that insulin gets produced, doesn't find any real sugars to handle, and so goes to work on whatever else you have around. This is not widely accepted, as this LiveStrong article shows. Another LiveStrong article on diet cola shows that the effect of stopping varies wildly between people.

Let me introduce a vicious circle: you're exhausted, so you drink caffeinated beverages. You drink caffeinated beverages, so you sleep poorly. You sleep poorly, so you're exhausted. I lived in that cycle for years. It took a lot to get me to sleep, but when I slept, I slept so hard that, I'm told, all of my kids danced on my head without waking me. They were toddlers at the time, but still. Add to this hormonal and decision-making changes, and you'll see the problem.

I've mostly switched to just drinking coffee and water — "Don't drink calories", Tim Ferriss wrote —  and not being dehydrated might be a factor, too. Plus maybe something else. I don't know.

Anyway, the 280 lb guy I used to be was just a Diet Coke switch away from being the 213 lb guy I am today. There are diet switches I should still make — the food that's easiest for me to eat at work is the easiest for me to buy a month at a time, decreasing my cognitive load, but not the best food for me to eat every workday — but I doubt those changes would've meant much before I stopped drinking Diet Coke.


BLE, in case you care

These nRF8001 Bluetooth Low Energy boards are from Adafruit. When I got them, a friend soldered on the header pins, and one didn't work. I started a long conversation with Adafruit — it's not conducive to debugging when you can only do it one evening every two weeks — but I finally got pictures to them, and they suggested resoldering the ground pins. The friend resoldered them and other pins, and now I believe they're both going.

Of course, I need to get some more code to write and some more bits to pump before I definitively say "Yeah", but it looks like I'll soon be able to move on to learning to write things that talk to them. 

On Android. In Java.

Or on Windows. 

Looking forward to that.


Look At My Pretty Pictures

The red parts of this image are a hypocycloid.
The blue parts of this image are an epicycloid.
The gap on the right side is an unsolved problem.
Perl blogger Gabor Szabo of Perl Maven asked me to show off my SVG Spirograph code, and so here it is, released as a GitHub Gist. I'm not quite to the point where I'm happy with it.

Years ago, as a child, I knew people with Spirographs, and I loved playing with them, making these crazy patterns with just pen, paper and geared plastic circles.

Years later, when I heard that Canvas support was being added to browsers, I developed tools in JavaScript to draw these things. Nobody I knew really had any practical use for that skill, and so I kinda left it.

Until I started looking into laser cutters. Essentially, they're high-contrast printers, and they love vector graphics, using a thin red vector as an indication to cut rather than etch. So, when I saw Gabor write about the SVG module on Perl Maven, I knew what I must do.

Right now, I have three functions that generate these images, corresponding to epicycloid, hypocycloid and hypotrochoid curves, with two functions each providing the X and Y coordinates. In each, you take a circle and rotate another circle around it (outside for epicycloids, inside for the others). If the point you're tracing is on the circumference, it's a cycloid; if it's beyond that, it's a hypotrochoid, which, strictly speaking, could not be done with a Spirograph because the rings would get in the way. With math, Perl and SVG, this is not a problem.
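The standard parametric equations behind those functions, sketched in Perl; $R is the fixed radius, $r the rolling radius, $d the pen distance, and the names are mine, not the Gist's:

```perl
use strict;
use warnings;

# Hypotrochoid: a circle of radius $r rolling inside one of radius
# $R, pen at distance $d from the rolling circle's center. With
# $d == $r the pen is on the circumference: a hypocycloid.
sub hypotrochoid_xy {
    my ( $R, $r, $d, $theta ) = @_;
    my $k = ( $R - $r ) / $r;
    return (
        ( $R - $r ) * cos($theta) + $d * cos( $k * $theta ),
        ( $R - $r ) * sin($theta) - $d * sin( $k * $theta ),
    );
}

# Epicycloid: the rolling circle goes around the outside instead,
# pen on the circumference.
sub epicycloid_xy {
    my ( $R, $r, $theta ) = @_;
    my $k = ( $R + $r ) / $r;
    return (
        ( $R + $r ) * cos($theta) - $r * cos( $k * $theta ),
        ( $R + $r ) * sin($theta) - $r * sin( $k * $theta ),
    );
}

my ( $x, $y ) = hypotrochoid_xy( 5, 3, 5, 0 );
print "start of curve: ($x, $y)\n";    # (7, 0): (R - r) + d, 0
```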

SVG has the capability of drawing curves, but I assure you that the number of curves in the SVGs that created the images here is zero. If you draw lines between points, and those points are close enough, the lines look like curves, and the file sizes explode. On the other hand, if the distance between points is long enough, they are clearly lines, and it takes a large number of overlapping loops to disguise the straight lines and get a thick curve. My code currently steps through 1/400th of a degree at a time, which is fine for epicycloids and hypocycloids, but overkill for hypotrochoids.

Getting this code to handle Bézier curves and the like requires more math than I currently know, and might not actually be possible, but would certainly be good for reducing file sizes.

This is a number of hypotrochoids stacked on top of each other,
cut into balsa wood. I love the 21st century.
The other problem my code has is ensuring an end. Circles and ellipses are special cases of these curves, and with an irrational ratio between the radii, the curve will never complete, instead looking like a big black (or red, or blue) ring. So we need to ensure that, eventually, it halts. If it goes over the same part twice, it makes the line thicker, which I don't want, but the problem is that we're dealing with floating point: 0.0000000000000000014 isn't mathematically equal to 0.0000000000000000015, but if we have several x and y coordinates in a row that are within 0.0000000000000000001 of previous pairs, we're probably looping over the curve again. Eventually, I'll poke at that idea. As is, it loops 50,000 times, which is why the "both" image above isn't complete.
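The closure test I have in mind would look something like this; the epsilon and the names are mine, a sketch rather than what the Gist currently does:

```perl
use strict;
use warnings;

my $EPSILON = 1e-9;    # "close enough" for two floating-point coordinates

# True if the new point is within $EPSILON of an already-seen one,
# which suggests the curve has started retracing itself.
sub near_previous {
    my ( $x, $y, $seen ) = @_;
    for my $p (@$seen) {
        return 1
            if abs( $x - $p->[0] ) < $EPSILON
            && abs( $y - $p->[1] ) < $EPSILON;
    }
    return 0;
}

my @seen = ( [ 1.0, 2.0 ] );
print near_previous( 1.0000000001, 2.0, \@seen ) ? "looping\n" : "new ground\n";
```

Scanning every prior point each step is quadratic, so in practice you'd compare only against the starting point, or a short window of recent ones.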

And having an offset, to rotate these things as desired, would also be good. Code below, and also on GitHub.


Bug Report? Rant? Request for help? Running Bluetooth Low Energy from Arduino

I'm with a group working on a thing. What that thing will do is not important to this discussion at the moment. What is important is that I'm expecting a flood of data from the thing (think QS/IoT) to a phone. Plus, I'll have a second thing, doing the same thing.

The current demo hardware is using wireless ethernet, which is proving slightly problematic. It's built with the idea of being used by hobbyists who want to put the chip in a thing, not for a group of many developers to prototype and develop on the hardware. I'm also unsure about the ease of setting up WiFi Direct on two devices with simplified user interfaces, and whether it'll block your phone.

So, I'm looking into Bluetooth Low Energy. I got two BLE breakout boards from Adafruit, had a friend solder in the header pins, and hooked everything up according to the docset with my Arduino Uno R2. I installed the Nordic UART app on my phone and got bi-directional communication going from my phone to my laptop and vice versa.

Then, I tried to wire up my $9 Arduino, a Leonardo clone. No go. (I did it wrong. More later.)

Then, I bought an Arduino Uno R3 at half-price from Radio Shack. No go.

Then, I borrowed an Arduino Mega and started to look into the pins. By the code, the pins not set in software are the ones providing the breakout board with CLK, MISO and MOSI. This is my first time talking to a breakout board, so I'm a neophyte to the ways of SPI. The Leonardo moved the SPI controls to another set of pins, so now that I'm slightly less dumb, I get why that failed. But the fact that the Mega and Uno R3 failed still confuses me. I timed out at midnight — had more things to do the next day than time — so I don't know if I'm stupid or what, but I had others verify I had the wiring right, and I think I plugged the second board into the first breadboard, just to ensure both were working.

I think.
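For anyone else hitting this, the hardware SPI pins do move around between boards; this is from memory, so check your board's pinout diagram to be sure:

```
Board      MOSI     MISO     SCK      Notes
Uno        11       12       13       also on the ICSP header
Leonardo   ICSP-4   ICSP-1   ICSP-3   hardware SPI only on the ICSP header
Mega       51       50       52       also on the ICSP header
```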

Ultimately, getting this working with my R3 (or whatever) is a means to an end. It gives me behavior similar to what I expect from the things, so I can begin to work on the part that's receiving. I want to listen to two BLE devices at once. Or four. Or five. Or more. These things will be our things, perhaps coded in Arduino or not, but will certainly not be Leonardo or Uno boards, so understanding how SPI works differently on these boards is ultimately useless.

Except it absolutely must happen to move on to the next step.

Contributing to this frustration is the fact that I don't know enough about the problem domain to ask an intelligent question. I'm coming into the world of Arduino as a programmer, and in essence, what I have are variables I can set high or low, or sometimes somewhere in between, and variables I can check that are high or low, or sometimes somewhere in between. Between the pin you set and ground, or between 5V and the pin you check, there's a circuit, and there's a lot about wires and resistors and the process of handing values around that I just do not understand. With the Leonardo, the step after next (after being sure the second board works) would be to find female-to-female jumper wires, as all I have are male-to-male; then wiring to those SPI pins should be cake.

But it strikes me that someone else has failed to get this working on R3 Unos before, and I'm not seeing that person's cries for help, maybe because I'm not looking in the right place. Neither Stack Overflow nor the EE Stack Exchange site seem to really be good places, and as useful as the Adafruit Learn page is (written by the man who wrote the book on BLE), there's no way to engage back, to get clarification.

Anyway, I'll beat some more on it this evening. We'll see how far I get.