
2015/06/03

Testing AJAX APIs with Perl

In my lab, we have an AJAX-laden web tool which loads a certain JSON API on page load. It was judged too slow, so I created a program that wrote that JSON to a static file on regular intervals. The problem with that, of course, is that changes to the data would not show up in the static file until the next scheduled update.

So, we created a third version, which checks a checksum against the database: if it has changed, it regenerates the file and sends the data; otherwise, it opens the existing file and sends that.
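
Roughly, the server-side logic of that third version looks like the sketch below. To be clear, this is a sketch and not the production code: the DSN, the submissions table, the CHECKSUM TABLE query, and the cache file paths are all stand-ins I made up for illustration. The point is just that the expensive query and the JSON regeneration only happen when a cheap checksum query says the data has moved.

#!/usr/bin/env perl
# A rough sketch of the checksum-gated cache described above.
# Everything specific is made up: the DSN, the submissions table,
# the CHECKSUM TABLE query, and the cache file paths are stand-ins,
# not the real endpoint.
use strict ;
use warnings ;
use feature qw( say ) ;

use DBI ;
use JSON ;

my $cache_file = '/var/www/cache/the_static_file.json' ;
my $sum_file   = $cache_file . '.sum' ;

my $dbh = DBI->connect( 'dbi:mysql:database=lab', 'user', 'pass',
    { RaiseError => 1 } ) ;

# one cheap query to see whether the underlying data has changed
my ( undef, $checksum ) = $dbh->selectrow_array('CHECKSUM TABLE submissions') ;

my $stored = '' ;
if ( -e $sum_file ) {
    open my $fh, '<', $sum_file or die $! ;
    $stored = do { local $/ ; <$fh> } ;
}

if ( $checksum ne $stored or !-e $cache_file ) {
    # data changed (or there is no cache yet): regenerate the JSON file
    my $rows = $dbh->selectall_arrayref( 'SELECT * FROM submissions',
        { Slice => {} } ) ;
    open my $out, '>', $cache_file or die $! ;
    print $out encode_json( { data => $rows } ) ;
    close $out ;
    open my $sum, '>', $sum_file or die $! ;
    print $sum $checksum ;
    close $sum ;
}

# either way, what gets sent is the contents of the cached file
say 'Content-type: application/json' ;
say '' ;
open my $in, '<', $cache_file or die $! ;
print do { local $/ ; <$in> } ;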

I tested with Chrome Dev Tools, which told a bit of the story, but at a scale closer to anecdote than data. I wanted to go into the hundreds of hits, not just one. I pulled out Benchmark, which told a story, but not quite the one I wanted: it started the clock, ran the request n times, then stopped the clock, while I wanted clock data on each individual GET.

I also realized I needed to be sure that the data I was getting back was the same, so I used Test::Most to compare the objects I pulled out of the JSON. That was useful, but most useful was the program I wrote using Time::HiRes to grab the times more accurately, then Statistics::Basic and List::Util to take the collected arrays of sub-second response times and show me how much faster it is to cache.

And it is fairly significant. The best and worst cases were comparable, but in the average case the cached version is about twice as fast, and the static file about seven times faster. With, of course, the same staleness problem as before.

If I weren't about to take time out of the office, I'd start looking into other ways to make things faster. It's good to know, though, that I have the means to test and benchmark it once I get back next week.
#!/usr/bin/env perl
# my modern perl boilerplate
use feature qw( say ) ;
use strict ;
use warnings ;
# modules used
use LWP::UserAgent ;
use Benchmark qw{ :all } ;
my $agent = LWP::UserAgent->new() ;
my $count = 20 ;
my $base = 'https://example.edu/AJAX/endpoints' ;
my @apis ;
push @apis, '/the_caching_one.cgi' ;
push @apis, '/the_dynamic_one.cgi' ;
push @apis, '/the_static_file.json' ;
timethese( $count , {
    # labels follow the push order above: api is the fully dynamic
    # endpoint, cache is the checksum-cached one
    'api'    => sub { $agent->get( $base . $apis[1] ) } ,
    'cache'  => sub { $agent->get( $base . $apis[0] ) } ,
    'static' => sub { $agent->get( $base . $apis[2] ) } ,
} ) ;
exit ;
__DATA__
Benchmark: timing 20 iterations of api, cache, static...
       api: 11 wallclock secs ( 1.14 usr + 0.06 sys = 1.20 CPU) @ 16.67/s (n=20)
     cache:  7 wallclock secs ( 1.05 usr + 0.03 sys = 1.08 CPU) @ 18.52/s (n=20)
    static:  2 wallclock secs ( 1.20 usr + 0.02 sys = 1.22 CPU) @ 16.39/s (n=20)
#!/usr/bin/env perl
# my modern perl boilerplate
use feature qw( say ) ;
use strict ;
use warnings ;
# modules used
use List::Util qw{ min max sum } ;
use LWP::UserAgent ;
use Statistics::Basic qw(:all nofill) ;
use Time::HiRes qw( gettimeofday tv_interval ) ;
my $agent = LWP::UserAgent->new() ;
my $count = 20 ;
my $base = 'https://example.edu/AJAX/endpoints' ;
my @apis ;
push @apis, '/the_caching_one.cgi' ;
push @apis, '/the_dynamic_one.cgi' ;
push @apis, '/the_static_file.json' ;
my $times ;
# for each API endpoint being tested, run $count
# times and collect the elapsed time it takes to get said URL
# ensuring that the data is correct is another issue
for my $c ( 1 .. $count ) {
    for my $api (@apis) {
        my $end = ( split m{/}, $api )[-1] ;
        my $url = $base . $api ;
        my $t0  = [gettimeofday] ;
        $agent->get($url) ;
        my $t1      = [gettimeofday] ;
        my $elapsed = tv_interval( $t0, $t1 ) ;
        push @{ $times->{$end} }, $elapsed * 1000 ;
    }
}
say join "\t", qw{ name iter min max mean median } ;
say '-' x 55 ;
for my $api ( sort keys %$times ) {
    my @times   = @{ $times->{$api} } ;
    my $size    = scalar @times ;
    my $max     = max @times ;
    my $min     = min @times ;
    my $omean   = mean(@times) ;
    my $mean    = 0 + $omean->query ;
    my $omedian = median(@times) ;
    my $median  = 0 + $omedian->query ;
    say join "\t", $api,
        map { sprintf '%4d', $_ } $size, $min, $max, $mean, $median ;
}
say '' ;
say 'All times in milliseconds. Smaller is better' ;
say '' ;
__DATA__
name     iter   min   max  mean  median
-------------------------------------------------------
pi         20   378   894   610     583
pi.cgi     20   217   886   356     334
pi.json    20    49   171    83      74
All times in milliseconds. Smaller is better
#!/usr/bin/env perl
# this program compares three versions of the PI api for new submissions
# to see if they have the same data. If they don't have the same data
# their benchmarks are not comparable
# my modern perl boilerplate
use feature qw( say ) ;
use strict ;
use warnings ;
# modules used
use LWP::UserAgent ;
use Test::Most ;
use JSON ;
my $agent = LWP::UserAgent->new() ;
my $base = 'https://example.edu/AJAX/endpoints' ;
my @apis ;
push @apis, '/the_caching_one.cgi' ;
push @apis, '/the_dynamic_one.cgi' ;
push @apis, '/the_static_file.json' ;
my $data ;
# for each API endpoint being tested, get the responses
# and store in $data
for my $api (@apis) {
    my $end = ( split m{/}, $api )[-1] ;
    my $url = $base . $api ;
    my $r   = $agent->get($url) ;
    if ( $r->is_success ) {
        my $content = $r->content ;
        $data->{$end} = decode_json($content) ;
    }
    else { say 'ERROR', $end }
}
# compare each endpoint's output with the others, using $done
# to avoid duplication
my $done ;
for my $k1 ( sort keys %$data ) {
    for my $k2 ( sort keys %$data ) {
        next if $k1 eq $k2 ;
        my $k = join ' ', sort $k1, $k2 ;
        next if $done->{$k}++ ;
        my $j1 = $data->{$k1}{data} ;
        my $j2 = $data->{$k2}{data} ;
        cmp_deeply( $j1, $j2, 'are equal: ' . $k ) ;
    }
}
done_testing() ;
exit ;

2012/05/09

Wondering how to proceed with my web application

I don't want to go too deep into the details, because what it actually does is very specific to the lab. There is a lot of data I want to collect, select, connect, and commit to a database, which is very bog-standard for my web work.

Thing is, there is a lot of state I want to put together, and I hate assembling lots of state iteratively via repeated web inputs, because each trip back to the server adds time, annoyance, and potential for catastrophe. So I really want this to be an AJAX-led, mostly-JavaScript thing.

You want to know the absolute coolest thing about coding JavaScript? You can put the whole page together with it. You can have a page that looks like <body></body> and have absolute control over everything on it. True, when you do that, people who run without JavaScript get absolutely nothing, which is a problem. But there are times when you can build something in HTML with a few small, non-essential JavaScript additions, and there are times when putting it together in JavaScript is the whole point, and re-implementing it in HTML and CGI would just get in the way.

Except....

We are experimenting with MVC frameworks, very specifically Dancer. We've seen and done enough with Dancer to decide that this is the way we want to go with new web development. In a way I'm being pushed, but in another way I see this as accepting the change of technology. I don't want my skills to be rooted in the early 90s more than they already are. So I, for one, welcome the new MVC overlords. But I'm seeing a Template Toolkit page that says <body> <!-- Insert Cool Stuff Here --> </body>, which kinda obviates the purpose of making it dynamic like that in the first place.
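
For what it's worth, the two approaches don't have to fight: Dancer can serve the nearly empty Template Toolkit page and also the JSON route that the JavaScript builds the page from. Here's a minimal sketch of that split; the route paths, the template name, and get_lab_data() are placeholders I made up, not anything from our actual app.

#!/usr/bin/env perl
# A minimal Dancer sketch of the split described above: a nearly
# empty Template Toolkit page, plus a JSON route that the page's
# JavaScript fills itself from. Route paths, the template name,
# and get_lab_data() are made-up placeholders.
use strict ;
use warnings ;
use Dancer ;

set template => 'template_toolkit' ;

# the page itself is little more than <body></body> and a script tag
get '/entry' => sub {
    template 'entry' ;    # views/entry.tt
} ;

# the JavaScript pulls all of its state from here in one AJAX call
get '/api/entry_data' => sub {
    content_type 'application/json' ;
    return to_json( { fields => get_lab_data() } ) ;
} ;

sub get_lab_data {
    # stand-in for the real collect-select-connect database work
    return [ { name => 'sample_id', type => 'text' } ] ;
}

dance ;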

Right now, I'm thinking through the data, deciding on the right way to make and modify the database tables to get at it, and mocking up what I want it to look like in straight HTML, but I am getting close to the point where I have to make that decision, and the way isn't clear to me.

2011/04/08

Comments on BoilerWeb 2011 #bweb11

I took a paid vacation yesterday and attended Purdue's BoilerWeb conference. I had a blast. I saw old coworkers and classmates and bosses and had a good ol' time. 

Last year, evidently, it was a one-track thing, but interest was such that this year there were two tracks. This leads to some minor complaints. First was the lack of electricity available. For the first track, this was remedied by an attendee (who shares a job and workspace with one of the organizers) bringing in some power strips from the office, but the orientation of the other room and the modular setup of the floor meant that there was only one jack available at the back of the room. If you're going to follow along, liveblog, tweet and such for an all-day event, you need available power.

But that is my only complaint, and I can only categorize that as a minor gripe. And unlike the Indiana Linuxfest, I felt the printed and online scheduling information for this conference was top notch. (Not a slam, ILF guys. Just a pointer on what can be done better next year.)

The first pair of presentations were Know Your Limits, covering load testing, and jQuery Plugins. I felt that jQuery Plugins was much more in line with my needs. I write JavaScript. I try to write jQuery, which I learned about from a co-worker, but I'm much more the JS guy now, and I mostly have experience with my own code, so I don't know the best practices and have a great tendency to spend time writing my own stuff instead of using tried-and-tested existing plugins. Plus, as I figured out, I kinda clog up the global namespace, which isn't good. The presenter was very effective, and funny. I really liked his intro use of "Final Countdown" by Europe and his high-fiving much of the audience, cheesy as it was, and the presentation showed him eating his own dogfood, so to speak.

The next choice was Introduction to Responsive Design vs. NanoHUB. I've known people working with NanoHUB for a while, and while I respect them and their work, it is really orthogonal to anything I expect to ever work with. Meanwhile, Responsive Design as presented was all about writing once and presenting on everything from the desktop to the smartphone with three simple design aspects: a flexible grid, @media queries, and flexible images. The coolest part of that, the part that works best for me as a developer, is @media queries. Traditionally, those are used to pick different style sheets depending on whether you're viewing a page on screen or printing it. Now you can put the distinction into the stylesheet itself, rather than defining it in several pages, and make distinctions based upon window size IN THE CSS. So, one stylesheet. That's the win.

Next was Introduction to Content Strategy, which in a way is fundamental and in a way is meta to this sort of conference. "Content is king," but we spend more time and more money creating the context than the content. We create a framework and say it's done, but it isn't done.

My first degree was in Journalism and Mass Communication, and as part of the degree, we were required to get an internship. There was a requirement that set our program apart at that time: it had to be a paid internship, because if the internship was unpaid (as the majority of journalism internships were), the assumption was that you would not be used for real work and thus would not learn anything. Years later, as I was starting Computer Science, there was a course where they brought in people from industry to present what you could do with a CS degree. One of the first presenters mentioned that he had internships and that you should send him resumes. I asked if they were paid internships. Imagine if I had asked whether we would be breathing oxygen and using electricity to light the office; that's the kind of response I got. As a society, we do not value the content creators the way we value the interface creators. This talk kinda covered that, and offered some means to run the process so you make and keep your content current.

There was a presentation on Food For The Heart, which had Ruby on Rails and data-munging aspects I was curious about (I need to create a test server and learn some RoR stuff), but I instead went to the Class Hacks presentation on Mixable. The best way to describe it is as a kind of Facebook for classrooms, and it does connect to Facebook. Judging by the tweets of people who saw the "Food" presentation, the main take-away is that Fat Secret has a nutrition API. That is something I should take a look at. What has my curiosity is Mixable, which I've started to get into for the community of it. Also, a little bit of inspiration in terms of design. I do web design like a web developer: without a real eye for aesthetics and with a tendency to implement new aspects of CSS only until I understand all the controls and get bored. Anyway, Mixable is cool so far.

The talk on Rapid Development in Groups was like being told to eat my vegetables. I know I should start using revision control. I know I should look into MVCs. I know I should eat my vegetables. Being told is not as useful as one might think. The talk Design Patterns Every Web Developer Should Know sort of fit into that. Design patterns are a subject I have heard about but don't really understand well enough to know why I should spend my valuable time diving deeper. I've started to look into the three presented, but the presenter had the time and the audience to dive further into each of the three, so the opportunity was a bit wasted, to the point where "Understand the Facade pattern" has joined the vegetable list, too.

All in all, I think I'm energized and more curious about the possibilities of the newest generation of web technologies. Great conference, guys. Make it better next year.

2010/08/26

Quick Google Bookmarks Tutorial/Reminder

This is a question that came up on one of the new StackExchange betas, WebApps.
There appears to be two distinct sets of bookmarks maintained by Google. If you visit Google Notebook or Google Bookmarks you get one set. And if you sync Chrome accounts you get another. My question is: how can I get one set of bookmarks maintained on Google's web apps and through Chrome? I want to be able to access the bookmarks I have synced with Chrome from a web app when I'm using a public computer.
I answered it, but I'll hit the topic here, because I can.


This is Google Bookmarks. It seemed to be a response to Delicious, which turns the need for persistent and distributed bookmarks into a social media application. It wasn't imagined to be part of the browser. This was before Chrome was started, or at least before it was far along.

It was also before Google Docs was far along, because rather than working with Google Bookmarks, Google Chrome decided to work with Google Docs for bookmark storage. There's a folder under My Folders marked Google Chrome, which is where it stores your bookmarks.


So, guy on StackExchange, you CAN use a web app to access your bookmarks. The question is just which one. And, for all the things you can say about Chrome, this feature, the easy syncing of bookmarks across platforms, is the one that sold me on it. If all your bookmarks are in Google Bookmarks, it's dead easy to export them and import them into Chrome via the Bookmarks Manager.

Now, how do I sync bookmarks with a mobile Opera browser?