Quick history: I started out as a vi man, having the comical "how do I save and exit?" issues with emacs that I see lots of people complain about for vim. After college, my first job's standard editor was UltraEdit. In the lab, I experimented with KomodoEdit, have been a Sublime Text 2 user, and right now, on both Windows and Linux, I'm using Visual Studio Code.
As well as vim. There are occasional things VS Code can't do, or at least can't do with the packages I know about. Line sorting, for one. I like to have my use statements sorted; it helps me see at a glance whether the module I want is already there.
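For what it's worth, the feature is simple enough to sketch in a few lines of Perl (the module list below is just an example, not from any real file):

```perl
#!/usr/bin/env perl
# toy version of an editor's "sort lines" command, applied to a
# block of use statements
use strict;
use warnings;

my @lines = (
    "use YAML ;",
    "use Carp ;",
    "use JSON ;",
);
my @sorted = sort @lines;    # plain ASCII sort, like the editor command
print "$_\n" for @sorted;
```

Run it and the Carp line comes first, which is exactly the at-a-glance ordering I want in an editor.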
Similarly, over time, I have developed a fondness for specific fonts in my editor. I won't go through the long history of my preferences, but I can say that variants of Droid Sans Mono modified with a dotted or slashed zero, to distinguish it more easily from a capital O, are normally what I run in terminals and editors.
Consider the code from yesterday's post on Sentiment Analysis. There are a couple of places with skinny arrows (->), fat arrows (=>) and greater-than-or-equal signs (>=). Look specifically at line 21 for skinny arrows and greater-than-or-equal. Pretty, isn't it? I will also point out that line 21 shows the dotted zero and how easy it is to distinguish from letters.
Also look at the logical AND (&&) on line 36.
These are just in-editor rendering changes; the underlying characters stay plain ASCII, and the code from yesterday's post is copied from this editor.
I have it as my font for the HexChat IRC client, which supports ligatures, and for Git for Windows Bash, Windows Subsystem for Linux and PowerShell terminals on Windows, which don't, but where the font still looks good. I noticed an issue where it swallowed the equals sign in GNOME Terminal, so there I'm back to Droid Sans Mono Slashed, but that might have been a temporary issue.
Hrmm. Yesterday, I wrote on an Azure API. Today, I'm praising a Visual Studio-related editor. Tomorrow, I might get into the Windows Subsystem for Linux.
I can write a tool that goes through all old tweets and deletes ones that don't pass criteria, but I prefer to get out ahead of the issue and leave the record as it is.
And, as a later pass, you could pull a corpus, use a module like Algorithm::NaiveBayes and make your own classifier, rather than using Microsoft Research's Text Analytics API or another service. I was somewhere along that process when the hosting died, so I'm not bringing it here.
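To make the idea concrete, here is a toy, hand-rolled sketch of the naive Bayes approach. This is illustrative only: the training data is made up, and this is neither the lost code nor Algorithm::NaiveBayes's actual internals.

```perl
#!/usr/bin/env perl
# Toy sentiment classifier: word counts per label, scored with
# log-probabilities and add-one smoothing. A stand-in for what a
# module like Algorithm::NaiveBayes does with a real corpus.
use strict;
use warnings;

my %count;    # $count{$label}{$word} = occurrences
my %total;    # $total{$label}        = words seen for that label

sub train {
    my ( $label, $text ) = @_;
    for my $word ( grep { length } split /\W+/, lc $text ) {
        $count{$label}{$word}++;
        $total{$label}++;
    }
}

sub classify {
    my ($text) = @_;
    my %score;
    for my $label ( keys %count ) {
        for my $word ( grep { length } split /\W+/, lc $text ) {
            my $n = $count{$label}{$word} // 0;
            # add-one smoothing so unseen words don't zero things out
            $score{$label} += log( ( $n + 1 ) / ( $total{$label} + 1 ) );
        }
    }
    my ($best) = sort { $score{$b} <=> $score{$a} } keys %score;
    return $best;
}

# made-up micro-corpus for illustration
train( positive => 'always look on the bright side of life' );
train( negative => 'negative vibes gloom and doom' );
print classify('bright side'), "\n";    # prints "positive"
```

With a real labeled corpus behind train(), classify() would replace the call to the remote sentiment API.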
I was kinda under the thrall of Building Maintainable Software, or at least the first few chapters of it, so I did a few things differently in order to keep functions under 15 lines, but the send_tweet function didn't get the passes it'd need, and I could probably give check_status some love, or perhaps even roll it into something like WebService::Microsoft::TextAnalytics. In the meantime, this should allow you to always tweet on the bright side of life.
#!/usr/bin/env perl
use feature qw{ postderef say signatures } ;
use strict ;
use warnings ;
use utf8 ;
no warnings qw{ experimental::postderef experimental::signatures } ;
use Carp ;
use Getopt::Long ;
use HTTP::Request ;
use IO::Interactive qw{ interactive } ;
use JSON ;
use LWP::UserAgent ;
use Net::Twitter ;
use YAML qw{ LoadFile } ;
my $options = options() ;
my $config = config() ;
if ( check_status( $options->{ status }, $config ) >= 0.5 ) {
    send_tweet( $options, $config ) ;
}
else { say qq{Blocked due to negative vibes, man.} }
exit ;
sub send_tweet ( $options, $config ) {
    my $twit = Net::Twitter->new(
        traits          => [ qw/API::RESTv1_1/ ],
        consumer_key    => $config->{ twitter }{ consumer_key },
        consumer_secret => $config->{ twitter }{ consumer_secret },
        ssl             => 1,
        ) ;
    my $tokens = $config->{ twitter }{ tokens }{ $options->{ username } } ;
    if (   $tokens->{ access_token }
        && $tokens->{ access_token_secret } ) {
        $twit->access_token( $tokens->{ access_token } ) ;
        $twit->access_token_secret( $tokens->{ access_token_secret } ) ;
    }
    if ( $twit->authorized ) {
        if ( $twit->update( $options->{ status } ) ) {
            say { interactive } $options->{ status } ;
        }
        else {
            say { interactive } 'FAILED TO TWEET' ;
        }
    }
    else {
        croak( 'Not Authorized' ) ;
    }
}
sub check_status ( $status, $config ) {
    my $j   = JSON->new->canonical->pretty ;
    my $key = $config->{ microsoft }{ text_analytics }{ key } ;
    my $id  = 'tweet_' . time ;
    my $object ;
    push @{ $object->{ documents } },
        {
        language => 'EN',
        text     => $status,
        id       => $id,
        } ;
    my $json = $j->encode( $object ) ;
    my $api
        = 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment' ;
    my $agent   = LWP::UserAgent->new ;
    my $request = HTTP::Request->new( POST => $api ) ;
    $request->header( 'Ocp-Apim-Subscription-Key' => $key ) ;
    $request->content( $json ) ;
    my $response = $agent->request( $request ) ;
    if ( $response->is_success ) {
        my $out = decode_json $response->content ;
        my $doc = $out->{ documents }[ 0 ] ;
        return $doc->{ score } ;
    }
    else {
        croak( $response->status_line ) ;
    }
}
sub config () {
    my $config ;
    $config->{ twitter }   = LoadFile( join '/', $ENV{ HOME }, '.twitter.cnf' ) ;
    $config->{ microsoft } = LoadFile( join '/', $ENV{ HOME }, '.microsoft.yml' ) ;
    return $config ;
}
sub options () {
    my $options ;
    GetOptions(
        'help'       => \$options->{ help },
        'username=s' => \$options->{ username },
        'status=s'   => \$options->{ status },
        ) ;
    show_usage( $options ) ;
    return $options ;
}
sub show_usage ( $options ) {
    if (   $options->{ help }
        || !$options->{ username }
        || !$options->{ status } ) {
        say { interactive } <<'HELP';
Only Positive Tweets -- Does text analysis of content before tweeting
    -u  user    Twitter screen name (required)
    -s  status  Status to be tweeted (required)
    -h  help    This screen
HELP
        exit ;
    }
}
__DATA__
.microsoft.yml looks like this

---
text_analytics:
  key: GENERATED_BY_MICROSOFT

.twitter.cnf looks like this

---
consumer_key: GO_TO_DEV.TWITTER.COM
consumer_secret: GO_TO_DEV.TWITTER.COM
tokens:
  your_username:
    access_token: TIED_TO_YOU_AS_USER_NOT_DEV
    access_token_secret: TIED_TO_YOU_AS_USER_NOT_DEV

I cover access_tokens in https://varlogrant.blogspot.com/2016/07/nettwitter-cookbook-how-to-tweet.html
As discussed last time, I had been using my GitHub Pages space as a list of my repos. I had been considering moving my blogging from here to ... something else, and this looked like an interesting concept.
I've always developed for the web with a smart server side, and I've known from the start that this makes you very vulnerable, so I do like the idea of writing Markdown, committing it to the repo, and having the system take care of it from there. So, that's a win.
But, as far as I can tell, I've followed the "this is how you make an Atom feed" instructions and gotten no feed from it, and that, more than webhooks triggering on push, is how you start putting together the social media hooks that make blogging more than writing things down in a notebook. Which is a lose.
So, I'm not 100% happy with GitHub Pages and Jekyll, but the good thing is that I can write and commit from anywhere. If I used Statocles or another static site generator, I'd have to have that system running on whatever computer I blog from, or transfer it to there.
I would guess that, if I had the whole deal running on one of my machines, some of the small things would work better, but so far, getting a setup that displays pages on localhost exactly like github.io does has been less than working. And I would've liked to have this as a project page, jacoby.github.io/blog, so my personal page could be more of a landing page, but alas.
And, ultimately, I do want to end up not with myblog.<service>.com but with myblog.<me>.com. Every time I think about it, though, I think about the tools I'd build on it rather than the billboard for me, and it goes no further.
Early in my playing with Bootstrap, I made this as a way to begin to play with it. It is about as simple a GitHub API to LWP to Template Toolkit to Bootstrap tool as I could have written. I'm now thinking about how to make it prettier, but for now, it's what I use to regenerate my GitHub page, when I remember to.
I'm always curious about how people customize their prompt. I put name and machine in, with color-coding based on which machine, because while I spend most of my time on one or two hosts, I have reason to go to several others.
I work in a sub-basement, and for most of my work days, I couldn't tell you if it was summer and warm, winter and frozen, or spring and waterlogged outside, so one of the things I learned to check out is current weather information. I used to put the temperature on the front panel of our HP printer, but we've moved to Xerox.
Currently, I use DarkSky, formerly forecast.io. I know my meteorologist friends would recommend more primary sources, but I've always found it easy to work with.
I had this code talking to Redis, but I decided that it was an excuse to use Redis and this data was better suited for storing in YAML, so I rewrote it.
store_temp
#!/usr/bin/env perl
# stores current temperature with YAML so that get_temp.pl can be used
# in the bash prompt to display current temperature
use feature qw{ say state } ;
use strict ;
use warnings ;
use Carp ;
use Data::Dumper ;
use DateTime ;
use IO::Interactive qw{ interactive } ;
use JSON ;
use LWP::UserAgent ;
use YAML::XS qw{ DumpFile LoadFile } ;
my $config = config() ;
my $url
    = 'https://api.darksky.net/forecast/'
    . $config->{apikey} . '/'
    . ( join ',', map { $config->{$_} } qw{ latitude longitude } ) ;
my $agent    = LWP::UserAgent->new( ssl_opts => { verify_hostname => 0 } ) ;
my $response = $agent->get($url) ;
if ( $response->is_success ) {
    my $now = DateTime->now()->set_time_zone('America/New_York')->datetime() ;
    my $content  = $response->content ;
    my $forecast = decode_json $content ;
    my $current  = $forecast->{currently} ;
    my $temp_f   = int $current->{temperature} ;
    store( $now, $temp_f ) ;
}
else {
    say $response->status_line ;
}
exit ;
exit ;
sub store {
    my ( $time, $temp ) = @_ ;
    say {interactive} qq{Current Time: $time} ;
    say {interactive} qq{Current Temperature: $temp} ;
    my $data_file = $ENV{HOME} . '/.temp.yaml' ;
    my $obj       = {
        curr_time => $time,
        curr_temp => $temp,
        } ;
    DumpFile( $data_file, $obj ) ;
}
# ======================================================================
# Reads configuration data from YAML file. Dies if no valid config file.
# Always sets current => 1 on the loaded config.
#
# Shows I need to put this into a module
sub config {
    my $config_file = $ENV{HOME} . '/.forecast.yaml' ;
    if ( defined $config_file && -f $config_file ) {
        my $output = LoadFile($config_file) ;
        $output->{current} = 1 ;
        return $output ;
    }
    croak('No Config File') ;
}
And this is the code that reads the YAML and prints it, nice and short and ready to be called in PS1.
get_temp
#!/usr/bin/env perl
# retrieves the current temperature from YAML to be used in the bash prompt
use feature qw{ say state unicode_eval unicode_strings } ;
use strict ;
use warnings ;
use utf8 ;
binmode STDOUT, ':utf8' ;
use Carp ;
use YAML::XS qw{ LoadFile } ;

my $data_file = $ENV{HOME} . '/.temp.yaml' ;
if ( defined $data_file && -f $data_file ) {
    my $output = LoadFile($data_file) ;

    # '.' binds tighter than '||', so an appended "|| ''" fallback
    # could never fire; test for a defined value instead
    print defined $output->{curr_temp} ? $output->{curr_temp} . '°F' : '' ;
    exit ;
}
croak('No Temperature File') ;
I thought I had put a date-diff in there; I wanted to be able to say 'Old Data' if the update time was too long ago. I should change that.
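A staleness check along those lines could look like this, using the core Time::Piece module to parse the ISO-style timestamp the store script writes; the one-hour threshold here is an arbitrary choice of mine:

```perl
#!/usr/bin/env perl
# sketch: flag stored temperature data as stale when the recorded
# curr_time is more than $max_age seconds older than "now"
use strict;
use warnings;
use Time::Piece;

sub is_stale {
    my ( $stored, $now, $max_age ) = @_;
    my $fmt  = '%Y-%m-%dT%H:%M:%S';    # matches DateTime->datetime() output
    my $then = Time::Piece->strptime( $stored, $fmt );
    my $time = Time::Piece->strptime( $now,    $fmt );
    return ( $time - $then ) > $max_age;    # Time::Seconds compares numerically
}

# two hours old, one-hour threshold: stale
print is_stale( '2017-07-04T10:00:00', '2017-07-04T12:00:00', 3600 )
    ? "Old Data\n"
    : "fresh\n";    # prints "Old Data"
```

In get_temp, the second argument would be the current wall-clock time rather than a literal, and a stale reading would print 'Old Data' instead of the temperature.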
I should really put the config files in __DATA__ for show, because that would make clear that their location is hard-coded. For a desktop or server, that makes sense; the machine can only go as far as the power plug stretches. But, for other reasons, I adapted my bash prompt on my Linux laptop, and I recently took it to another state, so I'm thinking more and more that I need to add a step that looks up where I am before checking the temperature.
store_geo_temp
#!/usr/bin/env perl
# Determines current location based on IP address using Google
# Geolocation, finds current temperature via the DarkSky API
# and stores it into a YAML file, so that get_temp.pl can be used
# in the bash prompt to display current local temperature.
use feature qw{ say state } ;
use strict ;
use warnings ;
use utf8 ;
use Carp ;
use DateTime ;
use IO::Interactive qw{ interactive } ;
use JSON::XS ;
use LWP::UserAgent ;
use YAML::XS qw{ DumpFile LoadFile } ;
use lib $ENV{HOME} . '/lib' ;
use GoogleGeo ;

my $json     = JSON::XS->new->pretty->canonical ;
my $config   = config() ;
my $location = geolocate( $config->{geolocate} ) ;
croak 'No Location Data' unless $location->{lat} ;
my $forecast = get_forecast( $config, $location ) ;
croak 'No Forecast Data' unless $forecast->{currently} ;
say {interactive} $json->encode($location) ;
say {interactive} $json->encode($forecast) ;
my $now     = DateTime->now()->set_time_zone('America/New_York')->datetime() ;
my $current = $forecast->{currently} ;
my $temp_f  = int $current->{temperature} ;
store( $now, $temp_f ) ;
exit ;
# ======================================================================
# Reads configuration data from YAML files. Dies if no valid config files
sub config {
    my $geofile = $ENV{HOME} . '/.googlegeo.yaml' ;
    croak 'no Geolocation config' unless -f $geofile ;
    my $keys = LoadFile($geofile) ;
    my $forecastfile = $ENV{HOME} . '/.forecast.yaml' ;
    croak 'no forecast config' unless -f $forecastfile ;
    my $fkeys = LoadFile($forecastfile) ;
    $keys->{forecast} = $fkeys->{apikey} ;
    croak 'No forecast key' unless $keys->{forecast} ;
    croak 'No geolocation key' unless $keys->{geolocate} ;
    return $keys ;
}
# ======================================================================
# Takes the config for the API keys and the location, giving us lat and lng
# returns the forecast object or an empty hash on failure
sub get_forecast {
    my ( $config, $location ) = @_ ;
    my $url
        = 'https://api.darksky.net/forecast/'
        . $config->{forecast} . '/'
        . ( join ',', map { $location->{$_} } qw{ lat lng } ) ;
    my $agent    = LWP::UserAgent->new( ssl_opts => { verify_hostname => 0 } ) ;
    my $response = $agent->get($url) ;
    if ( $response->is_success ) {
        my $content  = $response->content ;
        my $forecast = decode_json $content ;
        return $forecast ;
    }
    return {} ;
}
sub store {
    my ( $time, $temp ) = @_ ;
    say {interactive} qq{Current Time: $time} ;
    say {interactive} qq{Current Temperature: $temp} ;
    my $data_file = $ENV{HOME} . '/.temp.yaml' ;
    my $obj       = {
        curr_time => $time,
        curr_temp => $temp,
        } ;
    DumpFile( $data_file, $obj ) ;
}
A few things I want to point out here. First off, you could write this with Getopt::Long and explicit quiet and verbose flags, but Perl and IO::Interactive allow me to make this context-specific and implicit. If I run it myself, interactively, I am trying to diagnose issues, and that's when say {interactive} works. If I run it in crontab, then it runs silently, and I don't get an inbox filled with false negatives from crontab. This corresponds to my personal preferences; if I were to release this to CPAN, I would likely make these things controlled by flags, and perhaps allow latitude, longitude and API keys to be passed in that way.
But, of course, you shouldn't get into the habit of passing keys on the command line, because then your keys show up in the process table. It's okay if you're the only user, but it's not best practice.
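For anyone who hasn't used IO::Interactive, here is a hand-rolled sketch of the behavior I'm relying on. This is an illustration of the idea, not the module's actual implementation:

```perl
#!/usr/bin/env perl
# Sketch of what IO::Interactive's interactive() gives you: a
# filehandle that is STDOUT when attached to a terminal and a null
# handle otherwise, so cron runs stay silent.
use strict;
use warnings;
use File::Spec;

sub interactive_fh {
    return \*STDOUT if -t STDOUT;    # a person is watching: speak up
    open my $null, '>', File::Spec->devnull
        or die "Cannot open null device: $!";
    return $null;                    # cron or a pipe: stay quiet
}

my $fh = interactive_fh();
print {$fh} "Current Temperature: 72\n";    # visible only at a terminal
```

Printing through the returned handle means the script never has to check a verbose flag itself; the context decides.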
This is the part that's interesting. I need to make it better/stronger/faster/cooler before I put it on CPAN, maybe as something like Google::Geolocation or the like. I'll have to read some existing Google-pointing modules on MetaCPAN before committing to a name. Geo::Google looks promising, but it doesn't do much with "Where am I now?" work, which is exactly what I need here.
Google's Geolocation API works better when you can point to access points and cell towers, but that's diving deeper than I need; the weather will be more-or-less the same across the widest accuracy variation I could expect.
GoogleGeo
package GoogleGeo ;

# interfaces with Google Geolocation API
# https://developers.google.com/maps/documentation/geolocation/intro
use feature qw{say} ;
use strict ;
use warnings ;
use Carp ;
use Exporter qw(import) ;
use JSON::XS ;
use LWP::Protocol::https ;
use LWP::UserAgent ;

our @EXPORT = qw{
    geocode
    geolocate
    } ;

my $json  = JSON::XS->new->pretty ;
my $agent = LWP::UserAgent->new ;

sub geocode {
    my ( $Google_API_key, $obj ) = @_ ;
    croak unless defined $Google_API_key ;
    my $latlng = join ',', $obj->{lat}, $obj->{lng} ;
    my $url
        = 'https://maps.googleapis.com/maps/api/geocode/json?key='
        . $Google_API_key
        . '&latlng='
        . $latlng ;
    my $r = $agent->post($url) ;
    if ( $r->is_success ) {
        return $json->decode( $r->content ) ;
    }
    return {} ;
}

sub geolocate {
    my ($Google_API_key) = @_ ;
    my $url = 'https://www.googleapis.com/geolocation/v1/geolocate?key='
        . $Google_API_key ;
    my $r = $agent->post( $url, {} ) ;
    if ( $r->is_success ) {
        my $o = $json->decode( $r->content ) ;
        return {
            lat => $o->{location}{lat},
            lng => $o->{location}{lng},
            acc => $o->{accuracy},
            } ;
    }
    return {} ;
}

'here' ;
If this has been helpful or interesting to you, please tell me so in the comments.
A Delorean with a Perl-powered center column and the cutest little Flux Capacitor on the dashboard. Oh, the wonders you can see at a developer conference.
"A man's got to know his limitations."
That's a line from Dirty Harry Callahan in Magnum Force, but it really described my planning for the Perl Conference. Once the calendar was up, I went in, first and foremost thinking "What are skills I need to learn?"
One crucial skill is version control. It's difficult to add to my main workflow, as I develop in production. (I live in fear.) But I'm increasingly adding it to my side projects. It is especially part of the process for maintaining the site for Purdue Perl Mongers, as well as aspects of HackLafayette, but beyond certain basics, I just didn't know much about how to use Git and version control to improve my projects. I learned how CPAN Testers tests your code on many platforms after you upload to CPAN, and how Travis CI and AppVeyor test against Linux, macOS and Windows after pushing to GitHub, but how to track changes, align issues with branches, and so on were all new to me. So, I started Wednesday with Genehack and Logs Are Magic: Why Git Workflows and Commit Structure Should Matter To You. (Slides) I fully expect to crib ideas and aliases from this talk for some time to come.
There was a talk at a local software group featuring a project leader from Amazon on the technology involved with Alexa, covering a lot of how it works, going from speech-to-text to tokenization and identification of crucial keywords -- "Alexa, have Domino's order me a Pizza" ultimately boiling down to "Domino's" and "Pizza" -- and proceeding from there. It gave a sense of how Amazon is taking several hard problems and turning them into consumer tools.
What came very late in the talk is how to interface my systems with Amazon's "Talking Donkey", and I had a few conversations where we talked about starting the day with "Alexa, what fresh hell is this?" and getting back a list of systems that broke overnight, but I lacked a strong idea of what is needed to make my systems interact with the Alexa system.
But, thankfully, Jason Terry's Amazon, Alexa and Perl talk covered this, albeit more in the "Turn my smart lights on" sense than in the "Tell me what my work systems are doing" sense. Still, very much something I had been interested in.
But, as I implied, Amazon does a lot of heavy lifting with Alexa, getting it down to GETs and POSTs against your API. If you're running this from home, where you have a consistent wired connection, this works. But if you're running it, for example, in your car, you need it to be small, easy, and self-contained. Chris Prather decided to MAKE New Friends with a Raspberry Pi 3. This was a case of Conference-Driven Development, and he didn't have it ready to demonstrate at this time.
I've been trying to move my lab to the New Hotness over time, and because I will have to tie things back to existing systems, I have avoided anything that requires learning too much to make it work. Joel Berger presented Vue.js, Mojolicious, and PostgreSQL chat in less than 50 lines. I've heard good things about Vue.js; its creator has described it as the parts he needed from Angular without the stuff he didn't use. (RFC Podcast) I use jQuery and Template and not much more, so the "only what you need" aspect sounds good to me. I fully expect to bug Joel on Twitter and IRC about this and other things I need to do with Mojolicious over the coming months. (Slides)
But, of course, not every talk needs to speak directly to my short-term needs. Stevan Little has been trying to add an object system to Perl 5 for quite some time, and in Hold My Beer and Watch This, he talked about Moxie, his latest attempt.
OK, the Moxie talk was in the same room as the Mojolicious talk and the next one; I could have gone to one of the others, but I decided against it. Oh well. I would put "do more object-oriented development" on my list of things to learn, so I was glad to hear it.
The last full talk was The Variable Crimes We Commit Against JavaScript by Julka Grodel. I would say I code JavaScript half as much as I code Perl, but twice as much as any other language. I knew certain parts, like how let gives lexical scoping like Perl's my. I had heard about let from my JS Twitter list, as well as from Matt Trout's talk, but there were certainly things I didn't know.
And, actually, things I still don't. I had technical difficulties with my laptop, and if I could've worked them out that day, I would've tried to set up a lightning talk. Alas, not only did it not work -- in the end, I swapped out the WiFi, and I think if I switched back, it'd be sane again -- but I missed a lot of her talk about arrow functions, which piqued the interest of Perl's functional programming guru, Mark Jason Dominus. (In a not-surprising addition to my limitations, I still haven't gotten to the end of Higher Order Perl.) Anyway, I believe that, after I finish this post, I will go back and watch this one again.
After this, there were Lightning Talks, interspersed with ads for various Perl Mongers groups, and urges for you to start a Perl Mongers group. I am likely going to do another post, with these talks from all three days, but to end this one, I'll show off one where M. Allan Noah talks about the SANE Project and how he reverse engineers scanner output to allow him to support scanners without vendor input.
Lack of documentation is a limitation, but knowing the limitation does not mean you have to stop, and lack of documentation will not stop him. We should all draw inspiration from that.
Now that I'm two weeks past the end, most of the talks are up, and I would like to commend the organizers. Conference organization is hard, and this venue made it harder. The US Patent and Trademark Office is a great place, but there are aspects the venue wanted secured that I would've preferred to be more open. Still, it was a beautiful venue, and I'd be glad to return.
But, the thing I'd like to commend them most on is the high quality and fast turnaround on talk videos. The audio is good, the video is clear and well-lit, and the slides take precedence over the speaker in framing. It's everything I want in a conference video.