Cookie Notice

As far as I know and remember, nothing on this page does anything with cookies.


Log your machines and Check your logs

"Logs" by Aapo Haapanen is licensed under CC BY 2.0

Our VMs were having problems last fall. Their connections to the file system would falter, causing a large number of processes to sit around waiting to write. The symptom we noticed was that the load averages would then rise incredibly high.

Like four-digits high.

So, I wrote something that would log load average once an hour. It was a convergence of lab need and an excuse to learn Log::Log4Perl. I also used Pushover to tell me when load average was greater than 20, as if I could do anything about it.

Below, Mail is a wrapper around Email::Sender::Simple and Pushover is a wrapper around LWP::UserAgent; they handle the formatting and authentication. Neither is necessary for the logging.

#!/usr/bin/env perl

# checks for high load average on a host using uptime and
# reports using Pushover
# logs uptime, high or low, via Log4perl

# Also sends result of ps to email to start to indicate what's
# actually doing something


use feature qw{ say } ;
use strict ;
use warnings ;
use utf8 ;

use Data::Dumper ;
use DateTime ;
use IO::Interactive qw{ interactive } ;
use Log::Log4perl ;

use lib '/home/djacoby/lib' ;
use Mail ;
use Pushover ;

# my $host = $ENV{HOSTNAME} ;
my $host = `/bin/hostname -s` ;
chomp $host ;

Log::Log4perl::init( '/home/djacoby/.log4perl.conf') ;
my $logger = Log::Log4perl::get_logger( 'varlogrant.uptime' );
my @uptime = check_uptime() ;
$logger->trace( qq{$host : $uptime[0] $uptime[1] $uptime[2]});

if ( $uptime[0] > 20 ) {
    my $ps = process_table() ;
    my $message ;
    $message->{ message } = "High Load Average on $host: " . join ' ' , @uptime ;
    my $out = pushover( $message ) ;
    # send_table( join "\n\n" , ( join ' ' , @uptime ) , $ps ) ;
}

exit ;

sub check_uptime {
    my $program = '/usr/bin/uptime' ;
    my $uptime = qx{$program} ;
    my @num = map {s/,//;$_ } ( split /\s+/ , $uptime )[-3,-2,-1] ;
    return @num ;
}

sub process_table {
    my $out = qx{/bin/ps -U gcore -u gcore u } ;
    return $out ;
}

sub send_table {
    my $body = shift ;
    my $date = DateTime->now()->iso8601() ;
    my $msg;
    $msg->{ identity } = 'example' ;
    $msg->{ subject } = qq{High Load on $host: $date} ;
    $msg->{ to } = '' ;
    $msg->{ body } = $body ;
    $msg->{ attachments } = [] ;
    send_mail($msg) ;
}
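The other half of the logging lives in the Log4perl configuration file. I won't swear to my exact contents, but a minimal configuration that would produce the date-time-host lines parsed later would look something like this (the appender name, log path, and pattern here are illustrative, not copied from my actual file):

```
# illustrative ~/.log4perl.conf -- appender name, path, and pattern
# are guesses, not the real config
log4perl.logger.varlogrant.uptime = TRACE, Logfile
log4perl.appender.Logfile          = Log::Log4perl::Appender::File
log4perl.appender.Logfile.filename = /home/djacoby/.uptime.log
log4perl.appender.Logfile.layout   = Log::Log4perl::Layout::PatternLayout
log4perl.appender.Logfile.layout.ConversionPattern = %d{yyyy/MM/dd HH:mm:ss} %m%n
```

The TRACE level matters because the program logs with $logger->trace(), and the pattern puts the date, time, and message on one line, which is what the parsing script below expects.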

Eventually, those issues worked out. The evidence of file system hinkiness that remains is that, on occasion, when we try to save or open a file, it takes a few minutes — I have learned from experience that mkdir is not atomic — but we never see the high load averages and catastrophically long file access times of a few months ago.

But the logging never left my crontab.

I started looking at and playing with new things, and I wrote an API that allowed me to curl from several machines once an hour, and I would get Pushover notifications when machines were down.

(You can really thank Phil Sturgeon and his excellent book, Build APIs You Won't Hate, for that. I'm not quite there with my API, though. It'd probably make an interesting blog post, but it's built on pre-MVC technology.)

(And yes, I really like Pushover. In general, I turn off notifications for most apps and only pay attention to what I have Pushover tell me.)

Anyway, I'd get notifications telling me my web server was down, then pull out my phone and find the web server up and responsive. I put the heartbeats into MySQL, and a query told me that on some hours I'd get five heartbeats, on some four, and on some three, so I was sure the issue wasn't with the API.

I also log and get Pushover notifications from the @reboot section of my crontab, and that hadn't warned me recently, so I knew the machines were up but not responding.

Then I remembered that I never stopped monitoring load averages, and started looking at those logs.

#!/usr/bin/env perl

# reads and parses the uptime log

use feature qw{ say } ;
use strict ;
use warnings ;
use utf8 ;

use DateTime ;

my $file = q{/home/jacoby/mnt/rcac/.uptime.log} ;

if ( -f $file && open my $fh, '<', $file ) {
    my $data ;
    my $last ;
    my $obj ;
    while (<$fh>) {
        chomp ;
        my ( $date, $time, $server ) = split m{\s+} ;
        next unless $server ;

        # say $server ;
        my ( $year, $month, $day ) = split m{/}, $date ;
        my ( $hour, $minute, $second ) = split m{:}, $time ;
        my $latest = DateTime->new(
            year      => $year,
            month     => $month,
            day       => $day,
            hour      => $hour,
            minute    => $minute,
            second    => 0,
            time_zone => 'UTC',
            ) ;
        my $diff = 0 ;

        # next if $year != 2016 ;
        # next if $month < 7 ;
        my $ymd = $latest->ymd ;
        my $hms = $latest->hms ;
        next if $ymd !~ /^2016-07/ ;
        push @{ $obj->{$ymd}{$hms} }, $server ;
    }
    my @hosts = sort qw{ genomics genomics-test genomics-apps genomics-db } ;
    for my $y ( sort keys %$obj ) {
        my $day = $obj->{$y} ;
        for my $h ( sort keys %$day ) {
            my @list = @{ $obj->{$y}{$h} } ;
            my %list = map { $_ => 1 } @list ;
            my @down = grep { !$list{$_} } @hosts ;
            next if !scalar @down ;
            say join ' ', $y, $h, @down ;
        }
    }
}

__DATA__
two days results:

2016-07-23 01:00:00 genomics-test
2016-07-23 02:00:00 genomics-test
2016-07-23 08:00:00 genomics genomics-db
2016-07-23 13:00:00 genomics-apps
2016-07-23 16:00:00 genomics-apps
2016-07-23 18:00:00 genomics-db
2016-07-23 19:00:00 genomics-test
2016-07-23 21:00:00 genomics-apps
2016-07-24 05:00:00 genomics genomics-apps
2016-07-24 07:00:00 genomics
2016-07-24 10:00:00 genomics-apps
2016-07-24 10:01:00 genomics genomics-db genomics-test
2016-07-24 11:00:00 genomics-db
2016-07-24 13:00:00 genomics-apps
2016-07-24 18:00:00 genomics genomics-apps
2016-07-24 23:00:00 genomics genomics-apps genomics-db

We see above that, of the four VMs I monitor, all four fail to log multiple times, and many times three of the four fail to run their crontabs. Since I had something more solid than "Hey, that's funny", I went to my admins about this. It looks like the VMs are failing to authenticate with the LDAP server. My admins are taking it up the chain.

So, beyond how I make and parse logs, which might not be the best examples you can find, the message here is that it's hard to identify a problem unless you're tracking it, and tracking one thing might help you identify an entirely different problem.


Net::Twitter Cookbook: How to Tweet

The first line between Twitter Use and Twitter Obsession is TweetDeck. That's when the update-on-demand single-thread of the web page gives way to multiple constantly-updated streams of the stream-of-consciousness ramblings of the Internet.

That's the first line.

The second line between Twitter use and Twitter obsession is when you want to automate the work. If you're an R person, that's twitteR. If you work in Python, that's tweepy.

And, if you're like me, and you normally use Perl, we're talking Net::Twitter.

What follows is the simplest possible Net::Twitter program.

#!/usr/bin/env perl
use feature qw{ say } ;
use strict ;
use warnings ;
use Net::Twitter ;

# the consumer key and secret identify you as a service. 
# you register your service at
# and receive the key and secret

# you really don't want to have these written into your script

my $consumer_key    = 'ckckckckckckckckckckck' ;
my $consumer_secret = 'cscscscscscscscscscscscscscscscscscscscs' ;

my $twit = Net::Twitter->new(
    traits          => [qw/API::RESTv1_1/],
    consumer_key    => $consumer_key,
    consumer_secret => $consumer_secret,
    ssl             => 1,
    ) ;

# the access token and secret identify you as a user.
# the registration process takes place below.
# the first time you run this program, you will not be authorized,
# and the program will give you a URL to open in a browser where
# you are already logged into twitter.

# you really don't want to have these written into your script

my $access_token = '1111111111-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' ;
my $access_token_secret = 'zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz' ;

$twit->access_token($access_token) ;
$twit->access_token_secret($access_token_secret) ;

# everything interesting will occur inside this if statement
if ( $twit->authorized ) {
    if ( $twit->update('Hello World!') ) {
        say 'It worked!' ;
    }
    else {
        say 'Fail' ;
    }
}
else {
    # You have no auth token
    # go to the auth website.
    # they'll ask you if you wanna do this, then give you a PIN
    # input it here and it'll register you.
    # then save your token vals.

    say "Authorize this app at ", $twit->get_authorization_url,
        ' and enter the PIN#' ;
    my $pin = <STDIN> ;    # wait for input
    chomp $pin ;
    my ( $access_token, $access_token_secret, $user_id, $screen_name ) =
        $twit->request_access_token( verifier => $pin ) ;

    say 'The following lines need to be copied and pasted above' ;
    say $access_token ;
    say $access_token_secret ;
}

Again, this is as simple as we can reasonably do, without pulling the keys into a separate file, which I, again, strongly recommend you do. (I personally use YAML as the way I store and restore data such as access tokens and consumer keys. I will demonstrate that in a later post.)
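Since I keep mentioning it: the YAML approach is nothing fancier than LoadFile and DumpFile from YAML::XS against a dotfile. A sketch of the idea (the file name and hash layout here are illustrative, not necessarily what the later post will use):

```perl
#!/usr/bin/env perl
# sketch: keep Twitter tokens in a YAML dotfile instead of in the script
# the file name and hash layout are illustrative
use strict ;
use warnings ;
use YAML::XS qw{ LoadFile DumpFile } ;

my $config_file = $ENV{HOME} . '/.twitter.yml' ;

# after request_access_token() succeeds, save the values once
sub save_tokens {
    my ( $token, $secret ) = @_ ;
    DumpFile(
        $config_file,
        {
            access_token        => $token,
            access_token_secret => $secret,
        }
    ) ;
}

# every later run reads them back instead of hard-coding them
sub load_tokens {
    return unless -f $config_file ;
    my $config = LoadFile($config_file) ;
    return @{$config}{qw{ access_token access_token_secret }} ;
}
```

The win is that the program above loses its hard-coded $access_token lines: if load_tokens() returns nothing, you fall into the PIN-authorization branch and call save_tokens() afterward.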


Personal Programming Plans: Instagram2Background

I have this code which uses WebService::Instagram to grab my most recent picture from Instagram and set it as background image on my Ubuntu machines. I put it in my crontab and it just works.

#!/usr/bin/env perl

use feature qw'say' ;
use strict ;
use warnings ;

use Cwd 'abs_path' ;
use Data::Dumper ;
use HTTP::Request ;
use IO::Interactive qw{ interactive } ;
use LWP::UserAgent ;
use Try::Tiny ;
use YAML::XS qw{ LoadFile DumpFile } ;

use lib '/home/jacoby/lib' ;
use Instagram ;

my $config_file = join '/', $ENV{HOME}, '.i2b.yml' ;
my $config = LoadFile($config_file) ;

my $token    = $config->{user}{access_token} ;
my $id       = $config->{user}{id} ;
my $template = '' ;
my $ig       = connect($config) ;

$ig->set_access_token($token) ;
my $url = $template ;
$url =~ s/XX/$id/ ;
my $feed        = $ig->request($url) ;
my $data        = $feed->{data} ;
my @data        = grep { $_->{type} eq 'image' } @$data ;
my $most_recent = $data[0] ;
my $file        = '/home/jacoby/.i2b/i2b.jpg' ;

my $image_id    = $most_recent->{id} ;
my $image_url   = $most_recent->{images}->{standard_resolution}->{url} ;
my $image_text  = $most_recent->{caption}->{text} ;

if ( $config->{done} ne $image_id ) {
    my $image = imagegrab($image_url) ;
    imagewrite( $file, $image ) ;
    say {interactive} $image_id ;
    say {interactive} $image_text ;
    $config->{done} = $image_id ;
}

imageset($file) ;

DumpFile( $config_file, $config ) ;

exit ;

# takes a URL, returns the raw content
sub imagegrab {
    my $url      = shift ;
    my $agent    = LWP::UserAgent->new ;
    my $request  = HTTP::Request->new( 'GET', $url ) ;
    my $response = $agent->request($request) ;
    if ( $response->is_success ) {
        return $response->content ;
    }
    return undef ;
}

# takes a filename and an image, and writes image to filename
sub imagewrite {
    my $file  = shift ;
    my $image = shift ;
    if ( open my $fh, '>', $file ) {
        print $fh $image ;
        close $fh ;
        return 1 ;
    }
    return 0 ;
}

# takes a filename, sets it as the background image
sub imageset {
    my $img = shift ;

    return unless $img ;
    my $command = join ' ', qw{
        gsettings set
        } ;
    my $command2 = join ' ', qw{
        gsettings set
        } ;

    my $bg = '"file://' . abs_path $img . '"' ;
    qx{$command $bg} ;
    qx{$command2 'zoom'} ;
}

With IFTTT, it's even easier on Android.

But I don't spend all my time with just Android and Ubuntu. I spend a fair amount of time in Windows. I have a start with that: I can use C# to set an image to the background. This is the first step. I know, at least a little, about scheduling tasks in Windows, which is the last step.

So, the coming steps:

  • Using C# to get the Instagram JSON
  • Using C# to parse the Instagram JSON and pull URL of newest image
  • Using C# to download said image. Certainly related to getting the JSON.
  • Using C# to write the image file to the Windows equivalent of /tmp (because this will be released to others).
  • Knowing what the Windows equivalent of /tmp is.
  • Knowing where to hold my Instagram OAuth token data.

Seems like a small number of things to get done, once I sit down to do them. I just need to sit down, do them, and build the tool again.


Quantified Self: For What?

This is my daily step count since I first got a FitBit in 2012, in handy heatmap form.

It shows that 2014 was a pretty active year.

It shows that this year, I've really fallen off the game.

It shows that the main purpose of this process for me, of learning how to grab the data and plot it in different and hopefully useful ways, has succeeded.

It shows, really, that I'm much more about collecting the data than using it to change my life.

And I can only see that as a failure.

I've built other things on top of this. My daily steps pop up in my Twitter feed and bash prompt. If my battery gets low, I get notified on my tablet. If I go several days without a connection (if the battery dies without me noticing, or if I lose it, as I have done recently), I also get notified. I've made it very convenient to me.

But I failed to make greater amounts of movement an important part of my life. I failed to develop an appreciation for running or walking, at least in comparison to everything else I do.

So, I need to start thinking about how I can change my behavior.

And I probably shouldn't get a replacement FitBit until I have a plan for that.


My Reason to hate Python might be GONE!

Let me show you some code.

#!/usr/bin/env python

mylist = [0,1,2,3]

for n in mylist:
 for  m in mylist:
  print m,n
    print m,n
 print n

Looks pretty normal, right? Just a loop, right? Just a loop within a loop.

Yes it is, but if you look closer, you'll notice two spaces in front of the second print statement.

This is exactly what happened to me the first time I tried Python, about 15 years ago. It was code that showed open machines in ENAD 302, and I ran it on an NT machine I had installed ActiveState Python on. I no longer have that job, thus no longer have that machine and that version of Python. I no longer can find the code, and the computer lab in ENAD 302 is gone.

As is ENAD.

All I have is the memory of pages of error reports that didn't tell me that the problem was a bad indent halfway into a 200-or-so-line Python program. This has led me to set expandtab, or the equivalent, in every editor I've used since. Burn me once, shame on you, but burn me twice...

I admit that I disliked Python before that, but then it was more "Perl does this already, so why do I have to learn how to do the exact same thing in another dynamic language? What do I gain?" rather than "This language takes as a core feature a means to create undetectable errors."

But no. My hatred of Python stopped coming from a logical place. "Creates undetectable errors" is a logical argument, one that is no longer true, but I got taken to a place of negative emotion, like someone who was bitten by a dog as a child and is now overcome with fear or hate whenever one comes near.

(I tried it a few times since, and each time, my experience said "this is an objectively stupid way of doing things", until I bumped into things like Sikuli or my FitBit code where there was either no other way or this was the easiest way to get to "working".)

Then I found someone online who said "tabs are better than spaces". For outputting formatted text, I do agree, but in code, that leads to invisible bugs. So, when someone is wrong online, you correct them.

But then, I wrote the above code, expecting the same errors and received none.

(In this process, I learned some interesting things about Sublime Text, like the way you can set color_scheme and draw_white_space and translate_tabs_to_spaces at a language-specific level, which I did to allow me to see the white space when writing the above code. Sublime Text is neat.)

I've been saying this for a while, but I think this is the last thing I needed to find out before I lived it: the Python that's here today is not the Python that bit me 15 years ago, and I should get over my hangups and "pet the dog". 


Let Them Fight: My Thoughts on #Googacle

It's wonderful to have the Oracle vs Google trial in San Francisco, so I can have the mental image of Google's Bugdroid and Java's Duke laying waste to the city like Godzilla and the MUTOs. Because, ultimately, that's what this is; two kaiju companies fighting it out at tremendous cost, and a man in a black robe taking the Ken Watanabe role saying "Let Them Fight".

Please, someone who can draw, put this up on DeviantArt. I need to see this.

I've said "I'm biased; I like every Google product I've worked with, and hate every Oracle product I've worked with." But this isn't true, because, on the one hand, VirtualBox and MySQL, and on the other, Google Wave.


But I admit my biases, and I do question them. Sun had the philosophy of "Software is free, hardware pays the bills", and licensed in accordance with that. This is why, after Oracle bought Sun, the Sun team in charge of MySQL could fork the GPLd code, leave to form MariaDB after (as I understand it) little more than a name change, and leave Oracle barely maintaining a direct competitor to their core product.

Sun open-sourced Java. Soon after I started CS, it became the language with which programming is taught at the college level. I think this is stupid, because the main benefit of Java is "write once, run anywhere", which is a direct response to the Unix Wars, where companies would make small weird incompatible changes to differentiate their kit from their competitors. Linux won the Unix wars, and now, you make one ELF-formatted executable and package it in DEB or RPM and you basically have 97.3% of server rooms, or more, once you factor in virtual machines.

"Write once, run anywhere" is a dead concept.

Java is still a core language, though, and Google, moving into a new, untested environment with a new, untested operating system, wanted something that programmers would feel comfortable with, so they went with Java.

But Java runtimes, as they existed at that time, were not up to the task, and they chose to re-implement Java, or at least a small subset of Java, so it behaved like Java to the Java devs they wanted to be Android devs.

This is where the question is. Oracle says "You didn't license it". Google says "We did license it; it's called the GPL". Or, at least, that's my understanding of the arguments; a big lesson of this trial is that developers shouldn't talk like lawyers and lawyers shouldn't talk like developers; that way lies legal trouble.

The GPL is what makes Linux free, and so much else. There's a LAMP stack (Linux, Apache, MySQL, PHP*) that allowed so much of the changes in the last 20 years. Without LAMP, without GPL, there's no Amazon, no Google, no Facebook.

(Let's pretend, for the purposes of this rant, that this is all good, okay?)

This is a battle between kaiju. Google cares about me as little as Godzilla cares about Elizabeth Olsen. But we still want Godzilla to defeat the MUTOs and Mecha-Godzilla and whatever comes up, and, because it uses as tools the things I associate with freedom, I still want Godzilla ... I mean Google, to win.


Death of a Project

Years ago, I learned some R. When I was doing so, I had decided to move from just being a vi man to trying something a little more modern, so I was using ActiveState's KomodoEdit.

A problem was that KomodoEdit had syntax highlighting for many languages, but not R. So, I did some digging, found some code that did what I wanted that someone else abandoned, adapted it some and made it work. Then I made it a GitHub repo called RLangSyntax. I had an itch, I scratched it, I made the backscratcher available to others and I went on with my life, eventually moving to SublimeText.

Until ActiveState released Komodo(IDE|Edit) 9, and I started getting issues in my repo. Those issues were solved by incrementing maxVersion from 8 to 9, and then by remembering and documenting the build process and adding a release download of the resulting xpi. And, because I've gone on to other languages and editors, I left it.

Until ActiveState released KomodoIDE X for 10. I figured that, as with 9, I'd eventually get issues, and so I decided to jump ahead of it. I installed KomodoIDE, tried to install RLangSyntax to see what the errors were. I contacted the support team and asked them a few questions, like "can I just assume Komodo will keep the same engine for syntax highlighting, so I can set maxVersion as 20 or something and let it go?"

Here I wish to say that, for both the 8->9 and 9->X conversions, the Komodo support team was helpful, friendly and intense. A bit more intense than I like, but being helpful and friendly leads me to forgive a lot. I use Sublime Text and vi for my editing needs, but I certainly feel that Komodo could work for me. I like how KomodoIDE takes its cue from Sublime Text and Atom.

But I made the minimum changes to get it to install, started waiting for the automated check by their version of Package Manager to take place, and got this comment:

"Hey guys, I do want to point out that as of Komodo 9.3 we have built-in support for R lang syntax."


I'm happy that happened. I really am. R is really not a thing I touch anymore. I look at ggplot2 and think "Hey, I could make really pretty plots with that", but I don't have reason to make new plots or change old plots right now. And, I'm very happy with my Sublime Text environment. This is an important change for both R and KomodoIDE, and I'm happy.

But, of all the toys I released on GitHub, this is the one that had the most interaction, which implies the most use. One measure of your value as a programmer is the number of users your code has, and this change, while useful to those who code R in KomodoIDE, makes me a programmer of lesser import. But, really, not so much, because beyond packaging and incrementing maxVersion, I didn't add much.

So, that repo's documentation now reads: "You probably don't want to use this project. So long, and thanks for the fish."


Did I Mention I Hate Default Mail Notifications?

We live in a world of spam, of free email accounts and large mailing lists. You do not want to enable promiscuous notifications in such a world. That way lies madness.

But never knowing that the important people in your life — those you love, those who pay you, those who fix your guitars — are trying to contact you because you've turned off notifications is madness also. Perhaps a worse madness.

But Perl exists. CPAN exists. There is a way out.

I wrote a program for more general-purpose mail-handling, mostly clearing spam out of my work accounts, but decided to rewrite in order to handle the act of warning that I had new mail.


# specialized version of imap_task that handles just warnings. 
# problem with previous attempts is that it kept warning about
# new mail that matched until it was marked it read or deleted

# the goal is to do things once, with a data store independent 
# from IMAP that indicates if the warning has been sent. 

# YAML? JSON? Mongo? We'll try YAML.

use feature qw'say state' ;
use strict ;
use warnings ;
use utf8 ;

use Carp ;
use DateTime ;
use DateTime::Duration ;
use DateTime::Format::DateParse ;
use Getopt::Long ;
use IO::Interactive qw{interactive} ;
use IO::Socket::SSL ;
use Mail::IMAPClient ;
use YAML::XS qw{ LoadFile DumpFile } ;

use lib '/home/jacoby/lib' ;
use Locked ;
use Notify qw{ notify } ;
use Pushover ;
use Say qw{ say_message } ;

I have gone to perlbrew for most of my usable Perl, and I would normally use #!/usr/bin/env perl as my hashbang, but it is hard to tell crontab to use a perl other than system perl, so rather than trying, I specify the perl I want. Your usage will vary.
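Concretely, that just means the crontab entry names the interpreter by full path; something like this (the paths here are made up for illustration):

```
# illustrative crontab line: name the perlbrew perl explicitly rather
# than relying on the hashbang, which would find system perl
*/15 * * * * /home/jacoby/perl5/perlbrew/perls/perl-5.22.0/bin/perl /home/jacoby/bin/imap_warn.pl
```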

The next four non-code lines are my standard. That's mostly what use Modern::Perl gets you, I think, but I like being able to specify.

After that, there's a bunch of modules from CPAN. IO::Socket::SSL and Mail::IMAPClient are crucial for interacting with the mail server, YAML::XS is the better YAML module, according to Gabor Szabo. I like programs that give me verbose output when I run them, but don't clog my cron inbox when run via crontab, so I really overuse IO::Interactive. I am not sure that I need all the DateTime stuff I load for this purpose, but better safe than sorry.

Then there's the stuff that I wrote for purposes such as this. I have many programs that I want to behave differently if the computer is locked, which means I'm not at my standing desk, so I wrote Locked. I wanted to use notify-send on my Ubuntu machines to pop up notifications, so I wrote Notify. Net::Pushover wasn't written when I started this, so I wrote Pushover to interact with Pushover and should've put it on CPAN myself. Alas. And Say doesn't have to do with say(), but rather is a wrapper around eSpeak, a speech synthesizer.

my @sender ;
my $debug = 0 ;
my $task ;
$task = 'work_alert' ;

GetOptions(
    'debug=i' => \$debug,
    # 'task=s'  => \$task,
) or exit(1) ;
# get the configuration
croak 'No task set' if length $task < 1 ;
my $config_file = $ENV{HOME} . '/.imap/' . $task . '.yml' ;
croak 'No task file' if !-f $config_file ;

my $settings = LoadFile($config_file) ;
$settings->{debug} = $debug ;

# set a message if one hasn't been set
$settings->{message} ||= 'You have mail' ;

my $has_spoken = 0 ;

say {interactive} '='x20;
my $warn_file   = $ENV{HOME} . '/.imap_warn.yml' ;
my $warnings = LoadFile($warn_file) ;
check_imap($settings) ;
DumpFile( $warn_file , $warnings ) ;
say {interactive} '-'x20;
exit ;

Here I establish a bunch of globals and everything up for check_imap(), the main part of this program.

There are two YAML files that this program uses. One is .imap_warn.yml, a hash where the key is "$FROM||$SUBJECT||$DATE" and the value is 1, so I can tell whether I've been told about a certain email before. The other is .imap/work_alert.yml, the main configuration file, which looks like this:

port: 993
username: username
password: you_dont_get_my_password
message: 'You have mail'
                - 'big data'
                # Family
                - jacoby
                # The Lab

I have used a separate file to hold the specifications for my SMTP and IMAP servers, but here, having all the config in one place seemed right. Since it contains password information, it is especially important that permissions are set correctly, specifically that only you can read it. I do not test permissions in this program.

As mentioned, this is adapted from a more general mail-handling program, which takes specific configuration files for the kind of work it does. This just has the one, so that has been commented out, leaving just the debug flag.

I have had issues with YAML empty-writing files, which is why I separated .imap_warn.yml from the main configuration file.

sub check_imap {
    my $settings = shift ;
    my $client ;
    if ( $settings->{port} == 993 ) {

        my $socket = IO::Socket::SSL->new(
            PeerAddr => $settings->{server},
            PeerPort => $settings->{port},
        ) or die "socket(): $@" ;

        $client = Mail::IMAPClient->new(
            Socket   => $socket,
            User     => $settings->{username},
            Password => $settings->{password},
        ) or die "new(): $@" ;
    }
    elsif ( $settings->{port} == 587 ) {
        $client = Mail::IMAPClient->new(
            Server   => $settings->{server},
            User     => $settings->{username},
            Password => $settings->{password},
        ) or die "new(): $@" ;
    }

    my $dispatch ;
    $dispatch->{'alert'}          = \&alert_and_store_mail ;
    $dispatch->{'warn'}           = \&warn_mail ;

    if ( $client->IsAuthenticated() ) {
        say {interactive} 'STARTING' ;

        for my $folder ( keys %{ $settings->{folders} } ) {
            say {interactive} join ' ', ( '+' x 5 ), $folder ;
            $client->select($folder)
                or die "Select '$folder' error: ",
                $client->LastError, "\n" ;

            my $actions = $settings->{folders}->{$folder} ;

            for my $msg ( reverse $client->unseen ) {
                my $from = $client->get_header( $msg, 'From' ) || '' ;
                my $to   = $client->get_header( $msg, 'To' )   || '' ;
                my $cc   = $client->get_header( $msg, 'Cc' )   || '' ;
                my $subject = $client->subject($msg) || '' ;

                say {interactive} 'F: ' . $from ;
                say {interactive} 'S: ' . $subject ;

                # say { interactive } 'T: ' . $to ;
                # say { interactive } 'C: ' . $cc ;

                for my $action ( keys %$actions ) {

                    # say { interactive } '     for action: ' . $action ;

                    for my $key ( @{ $actions->{$action}->{from} } ) {
                        if (   defined $key
                            && $from =~ m{$key}i
                            && $dispatch->{$action} ) {
                            $dispatch->{$action}->( $client, $msg ) ;
                        }
                    }
                    for my $key ( @{ $actions->{$action}->{to} } ) {
                        if ( $to =~ m{$key}i && $dispatch->{$action} ) {
                            $dispatch->{$action}->( $client, $msg ) ;
                        }
                    }
                    for my $key ( @{ $actions->{$action}->{cc} } ) {
                        if ( $cc =~ m{$key}i && $dispatch->{$action} ) {
                            $dispatch->{$action}->( $client, $msg ) ;
                        }
                    }
                    for my $key ( @{ $actions->{$action}->{subject} } ) {
                        if ( $subject =~ m{$key}i && $dispatch->{$action} ) {
                            $dispatch->{$action}->( $client, $msg ) ;
                        }
                    }
                }
                say {interactive} '' ;
            }

            say {interactive} join ' ', ( '-' x 5 ), $folder ;
        }

        # $client->close() is needed to make deletes delete, but putting it
        # before the logout stops the process.
        $client->close ;
        $client->logout() ;
        say {interactive} 'Finishing' ;
    }
    say {interactive} 'Bye' ;
}

There are four things we can match on: from, to, cc and subject. I generally match on subject and from, but the code is there.

I have started but not finished Higher Order Perl by Mark Jason Dominus, but one of the things I got from that book (I think; if not, then from co-workers) is the concept of a dispatch table, where behavior of the program changes based on the data. I could simplify this a lot more, I'm sure, with more higher-order programming, but I'm reasonably happy with it right now.
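If dispatch tables are new to you, here is the pattern in miniature, stripped of all the IMAP machinery (a toy sketch, not code from the program above):

```perl
#!/usr/bin/env perl
# toy dispatch table: behavior is chosen by data, not by an if/elsif chain
use strict ;
use warnings ;
use feature qw{ say } ;

my %dispatch = (
    alert => sub { return 'ALERT: ' . shift },
    warn  => sub { return 'warn: ' . shift },
) ;

sub handle {
    my ( $action, $message ) = @_ ;

    # unknown actions fall through harmlessly, as in the mail loop
    return unless $dispatch{$action} ;
    return $dispatch{$action}->($message) ;
}

say handle( 'alert', 'new mail' ) ;    # says "ALERT: new mail"
```

Adding a new behavior means adding a key and a coderef, not another branch in a conditional, which is exactly what the configuration-driven action keys above rely on.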

# ====================================================================
# send to STDOUT without IO::Interactive, for testing
sub warn_mail {
    my ( $client, $msg ) = @_ ;
    say {interactive} 'warn' ;
    my $from = $client->get_header( $msg, 'From' ) || return ;
    my $to   = $client->get_header( $msg, 'To' )   || return ;
    my $subject = $client->subject($msg) || return ;
    my $date  = $client->get_header( $msg, 'Date' ) || return ;
    my $dt    = DateTime::Format::DateParse->parse_datetime($date) ;
    my $today = DateTime->now() ;
    $dt->set_time_zone('UTC') ;
    $today->set_time_zone('UTC') ;
    my $delta = $today->delta_days($dt)->in_units('days') ;
    say $from ;
    say $to ;
    say $subject ;
    say $dt->ymd ;
    say $delta ;
}

# ====================================================================
# alert about new mail
sub alert_and_store_mail {
    my ( $client, $msg ) = @_ ;
    say {interactive} 'alert and store' ;
    my $date = $client->get_header( $msg, 'Date' ) || 'NONE' ;
    my $from = $client->get_header( $msg, 'From' ) || 'NONE' ;
    my $to   = $client->get_header( $msg, 'To' )   || 'NONE' ;
    my $subject = $client->subject($msg) || 'NONE' ;
    my $key = join '||' , $from , $subject , $date ;
    $key =~ s{\s+}{ }g ;
    my $title =  'Mail From: ' . $from ;
    chomp $title ;
    chomp $subject ;

    return if $warnings->{$key} ;
    $warnings->{$key} = 1 ;

    $from =~ s{\"}{}gx ;

    # pushover() when the screen is locked, say_message() and notify()
    # when it isn't, as described below
    if ( is_locked() ) {
        pushover(
            {   title   => $title ,
                message => $subject ,
            } ) ;
        }
    else {
        say {interactive} $title  ;
        say {interactive} $subject ;
        say {interactive} defined $warnings->{$key} ? 1 : 0 ;
        say {interactive} 'has spoken: ' . $has_spoken ;
        if ( ! $has_spoken ) {
            say_message( { message => $settings->{message} , title => '' } ) ;
            notify(
                {   title   => $title ,
                    message => $subject ,
                    icon    => '/home/jacoby/Dropbox/Photos/Icons/mail.png' ,
                } ) ;
            }
        }
    $has_spoken = 1 ;
    return ;
    }

warn_mail() is useful for debugging, but the work of the program is done in alert_and_store_mail(). $client is the IMAP connection, and $msg is the message itself. I find that I have to send both. I might be doing it wrong, though.

And here is where my modules come in. is_locked() returns a boolean, depending on which way you lock your screens. say_message(), pushover() and notify() share a format, a hashref containing title and message. say_message() tells me that I have notifications coming, and they show up on my desktop. And if I'm away from my desk, they show up on my tablet because of Pushover.
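As a minimal sketch of what an is_locked() could look like, assuming a GNOME desktop with gnome-screensaver-command on the path (my actual module checks whatever locker the machine runs, so adapt accordingly):

```perl
#!/usr/bin/env perl
use strict ;
use warnings ;

# Sketch of is_locked(): ask the screensaver whether it is active.
# Assumes gnome-screensaver-command exists; if it doesn't, the query
# returns nothing and we report "not locked".
sub is_locked {
    my $out = qx{gnome-screensaver-command --query 2>/dev/null} || '' ;
    return ( $out =~ /is active/ ) ? 1 : 0 ;
    }

print is_locked(), "\n" ;
```

The point is only that it returns a boolean; the shared hashref format ( title plus message ) is what lets say_message(), pushover() and notify() be swapped for one another.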

I'll put this into a repo on GitHub, including all the modules. I would like to get this into shape to be something like App::imap_warn or the like, but I'm not there yet. I'm sure there's interest, because default notifications suck.


Purdue Perl Mongers - April 13 - "Starship Mongers"

I wrote a quick five-minute countdown timer in JavaScript just for this.
I don't think I've mentioned it here, but I'm one of the core members of Purdue Perl Mongers, which I've wrangled into a SIG of Greater Lafayette Open Source Symposium (#GLOSSY) to try to reach out to others in the Open Source community.

I was going to talk about DBIx::Class and how it connects to Dancer, but I haven't learned nearly enough about DBIx::Class to talk about it, and didn't have enough open days to come up with a decent presentation, so I punted.

Thus Starship Mongers!

It's a variation on "Lightning Talks", which give speakers a strict five-minute window, but because we're a small group, I decided to add a wrinkle: "Everybody Talks! No one quits!"

(The next part of the quote seems a little too tough for a user group.)

This means that I intend that everyone should talk for five minutes on something. Doesn't have to be Perl. Doesn't have to be programming, or computing, or open source. Just has to be something you are interested in or have questions about. (But, remember your audience.)

I hope that this will charge up the group and bring up ideas for upcoming meetings. If nothing else, it'll give me time to get up to speed on DBIC.


Diagnosing A Problem: OddMuse

I work in a lab in a large research-centered university. We use a wiki to serve as our lab notebook where we keep notes about the samples that go through. We're also a Perl shop, so we went with a Perl-based wiki named OddMuse (a fork of UseMod). This has been our platform of choice for nearly a decade.

Today, it was reported that a few pages would hang during loading, giving up after 300 seconds, as we have a five-minute timeout in our Apache config. I shame myself by admitting that I went to the OddMuse IRC channel before I looked at /var/log/html/error_log, but that is what happened. The error log reported:

[Mon Apr 04 13:11:54 2016] [error] [client] (70007)The timeout specified has expired: ap_content_length_filter: apr_bucket_read() failed

I'm not strong in my Apache fu, but I'm pretty sure this means that we hit the timeout. It doesn't really say why we hit the timeout, though.
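For reference, the knob in question is Apache's Timeout directive, which caps how long the server waits on I/O for a request. A five-minute limit would be set like this (a sketch of the relevant line, not our actual config file):

```apache
# httpd.conf: give up on a request's I/O after 300 seconds
Timeout 300
```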

Which brings up a weirdness. Take the page in question: you can get it in its full glory, before it's turned into HTML and spat out, and that raw page always loads fast.

I "solved" the issue by editing and saving the file. I still don't know why OddMuse can handle the data in raw form but cannot convert it to HTML. I am currently going down two roads of thought. The first is that there was a filesystem issue. We're working on a filesystem that is amazing in its redundancy, size, and sheer number of connected nodes, but on occasion we hit points where it falls down, giving us a several-minute wait for commands such as ls or clear to run.

The other thought relates to how we actually use OddMuse. We wanted to have a front-end that behaved nearly like Word, so we use CKEditor and save HTML instead of wiki markup. At first, we saved samples in groups of up to 10, writing them to an unordered HTML list, but now we're pushing 400. The stub of each wiki page is created programmatically, and I am thinking that the higher numbers might be more than it can take.

This, I guess, gives me something I can test: write a script that starts with, say, 100 elements in a list, then builds it up until the page doesn't render. I can do that. And I will do that tomorrow, because it's after 5pm today.
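A sketch of that test harness, generating HTML unordered lists of increasing size; the file names and the step sizes are placeholders, and feeding each file to the wiki is left as the manual step:

```perl
#!/usr/bin/env perl
# Generate HTML files containing unordered lists of increasing size,
# so each can be pasted into a wiki page to find where rendering chokes.
use strict ;
use warnings ;
use feature qw{ say } ;

sub make_list_html {
    my ($count) = @_ ;
    my $items = join '', map { "<li>Sample $_</li>\n" } 1 .. $count ;
    return "<ul>\n$items</ul>\n" ;
    }

# 100, 200, ... 500 elements; adjust the ceiling as needed
for my $size ( map { $_ * 100 } 1 .. 5 ) {
    my $file = "list_$size.html" ;    # placeholder file name
    open my $fh, '>', $file or die "Can't write $file: $!" ;
    print $fh make_list_html($size) ;
    close $fh ;
    say "wrote $file" ;
    }
```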