Wednesday, November 25, 2009

JRuby and JNI and OS X.

Like practically everyone, I sat down a few years back and picked up enough Ruby to write an address-book app in Rails. Nice enough language, with a few nice features[0]. Still, at the end of the day... YADTL, so ultimately YAWN[1].

All this is to say that I found myself yesterday facing the need to convert a trivial Rails app into a Java servlet, with the added complication that it uses a third-party C++ library to do the bulk of the work.

JRuby is a trivial install, and the application in question didn't make use of continuations, so rip out all the Rails code, replace it with a simple loop that takes requests from the command line instead of a POST parameter, and we're in business.
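
A minimal sketch of the replacement driver (handle_request is a hypothetical stand-in for whatever the old controller action did with its POST parameter):

while line = STDIN.gets
  # hand the line to the logic that used to sit behind the POST parameter
  puts handle_request(line.chomp)
end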

Well, up to the point where it falls over because it needs the C++ library. Here's the first real hurdle: Apple's JDK for OS X is compiled as a 64-bit binary, which can't link against 32-bit libraries[2], while by default the compiler only produces 32-bit libraries.

$ file /usr/local/lib/libcrfpp.dylib 
/usr/local/lib/libcrfpp.dylib: Mach-O dynamically linked shared library i386

A quick check on Google turns up pages suggesting that I need to add -arch i386 -arch x86_64 to both CXXFLAGS and LDFLAGS. So after modifying the Makefile to compile the library with those flags and install it under /opt/local, we have:

$ file /opt/local/lib/libcrfpp.dylib 
/opt/local/lib/libcrfpp.dylib: Mach-O universal binary with 2 architectures
/opt/local/lib/libcrfpp.dylib (for architecture i386): Mach-O dynamically linked shared library i386
/opt/local/lib/libcrfpp.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
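
For the record, rather than hand-editing the Makefile, the same effect can usually be had at configure time (assuming the library's build uses autoconf; adjust to your build system):

$ ./configure --prefix=/opt/local CXXFLAGS="-arch i386 -arch x86_64" LDFLAGS="-arch i386 -arch x86_64"
$ make && sudo make install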

Then we need to compile the SWIG-generated JNI wrapper, which needs the same -arch flags but must also link against Apple's JNI installation. Back to Google, which turns up a page from Apple describing how to compile JNI code on Darwin; it supplies the include path and an additional link flag.

The command lines to compile the wrapper in question become:

g++ -arch i386 -arch x86_64 -c -I/System/Library/Frameworks/JavaVM.framework/Headers CRFPP_wrap.cxx
g++ -arch i386 -arch x86_64 -dynamiclib -L/opt/local/lib -lcrfpp -o libCRFPP.jnilib CRFPP_wrap.o -framework JavaVM
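
A quick sanity check that the wrapper also came out universal:

$ file libCRFPP.jnilib
libCRFPP.jnilib: Mach-O universal binary with 2 architectures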

Wanting to avoid calling System.loadLibrary from Ruby code, I wrote a quick wrapper class that makes the call in a static block:

package crfppwrapper;

import org.chasen.crfpp.Tagger;

public class CRFPP {
    // Factory method; by the time this is called the static block
    // below has already run, so the native library is in place.
    public static Tagger newTagger(String args) {
        return new Tagger(args);
    }

    static {
        // Runs once, when the class is first loaded; resolves to
        // libCRFPP.jnilib via java.library.path on OS X.
        System.loadLibrary("CRFPP");
    }
}

Next, modify the relevant Ruby file to load the jar:
require 'java'
require 'lib/CRFPP-0.53.jar'
and call the wrapper:
crfppwrapper.CRFPP.newTagger("...");

Finally, modify the top-level JRuby script to pass the necessary Java parameters to the JVM:

#!/usr/bin/env jruby -J-Djava.library.path=/opt/local/lib -J-cp .:lib/CRFPP-0.53.jar -w

And it worked. No more Rails; migrated to JRuby; and ready to bundle into a trivial servlet that will reintroduce the POST parameter to the mix.

[0] The implicit closure parameter to every function, and the elegant brace syntax for creating the closures themselves, do lend themselves to some wonderfully concise idioms.

[1] Of course, if you were coming from J2EE, you were probably more than happy to exchange static typing for configuration-by-convention.

[2] And all of a sudden I'm having flashbacks to the horror of IRIX binary toolchain management in the late '90s.

Tuesday, September 08, 2009

ldd equivalent under Darwin (OS X)

ldd is an incredibly useful command under Linux. It lists the shared libraries required by an executable, and shows exactly which .so file each dependency currently resolves to.

A similarly useful tool is objdump, which permits inspection of object files, especially executables and shared libraries, extracting things like the string table, symbol table, and various headers and sections.

Under Darwin these tools are combined in otool. ldd can be duplicated by otool -L.
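
For example (library paths and version numbers will vary between systems):

$ otool -L /usr/lib/libz.dylib
/usr/lib/libz.dylib:
        /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.3)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.0.0)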

Wednesday, August 26, 2009

Gov2.0 and Open-Source - a response

Cross-posted from "Data.gov and lessons from the open-source world" at the Gov2.0 Taskforce blog.

My biggest worry is that the government's response to this initiative will be the announcement of some $multi-million grand gesture: a big press conference, a Minister announcing the 'grand vision', and the possible benefits we could see lost in the maze that is large-scale government procurement.

The key insight of CatB (The Cathedral and the Bazaar) is the extent to which redundancy is a benefit in exploratory development.

For the moment, we have no idea of the correct model for Gov2.0 - we have some understanding of what has worked outside of government, and a few promising avenues of approach, but no actual answers.

So I think we want to recommend that different Agencies experiment with different approaches and that the OIC be tasked with:

  1. Examining the success/failure of the different attempts, and eventually helping agencies improve the success rate.
  2. Ensuring that the legal and regulatory requirements for aggregation and interoperability of/between these different services are standardised, as these are the issues that will derail bazaar development.
  3. Acting as a central clearing house where agencies/organisations/individuals can choose to self-publish interface descriptions, custom schemas, metadata element schemes, vocabularies, etc.
  4. Providing a collaboration and mediation service to allow the reconciliation of conflicting interfaces/schemas/schemes/vocabularies.

The result would hopefully be a myriad of exploratory projects, some of which would fail, most of which would be ho-hum, but many of which would succeed.

The OIC would act as an institutional memory, learning and recording the lessons learnt; an institutional coordinator, making sure that people wanting to aggregate/integrate the different data-sources aren't forbidden from doing so; and an institutional mediator, assisting the different projects in finding and working together when they would like to.

Please post any comments to the Gov2.0 site listed above, not here.

Monday, July 13, 2009

Google’s new OS could hit Microsoft where it hurts

In a blog post, Andy Goldberg quotes a line that I suspect a lot of Microsoft sycophants will be telling themselves over the next year:

"Google may or may not have the experience and capability of actually producing an operating system and getting it deployed," he said. "It may not realise how hard it is."

Anyone who takes this line should remind themselves: Chrome OS is not an operating system! It is a browser-based windowing system running on top of an open-source OS, and Google can definitely handle that. Moreover, Google has spent the better part of the past decade doing OS design and implementation; it's just that they haven't been selling it, they have been using it internally.

Friday, July 10, 2009

How to Write an Equality Method in Java

Martin Odersky, Lex Spoon, and Bill Venners have written an article on How to Write an Equality Method in Java. Given the amount of traffic my post on the same topic has attracted, I thought I might take a look. Their solution is the best I have seen, and well worth your time, but the article suffers from the same problem as every article I've read on this topic: it fails to properly define equals().

A Java equals() method should implement a test for bisimulation.

To me, the remarkable thing about the Artima article is that it provides an example where the traditional getClass() approach I have previously recommended fails to implement bisimulation. The good news is that this is a rather artificial use-case, and as such isn't going to be causing many bugs. The bad news is that the fix requires a helper method defined at the topmost concrete class, and the need for it isn't intuitive.
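
The helper in question is canEqual; reconstructed from memory, their Point example runs along these lines:

public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Subclasses that add fields override this (and equals) so that
    // a Point is never considered equal to an extended Point.
    public boolean canEqual(Object other) {
        return other instanceof Point;
    }

    @Override
    public boolean equals(Object other) {
        if (other instanceof Point) {
            Point that = (Point) other;
            // that.canEqual(this) gives the other object a veto,
            // which is what restores symmetry across the hierarchy.
            return that.canEqual(this) && this.x == that.x && this.y == that.y;
        }
        return false;
    }

    @Override
    public int hashCode() {
        return 41 * (41 + x) + y;
    }
}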

Anyway, if you program in Java you need to read and understand this article.

Thursday, July 02, 2009

Scala and Various Things.

Not quite ready to move across yet; however, I'm capturing some links to make life that little bit easier when I do. I'm hoping these approaches will work as well for Elmo/OTM as they do for Hibernate.

I'll be updating this with any more links I want to save for a sunny day.

A couple more links worthy of rereading:

Wednesday, June 17, 2009

Looking for JSON, ReST, (and in-memory RDF) frameworks

Currently writing a number of small web-services to do various informatics tasks (more detailed post to come). Fortunately I'm not the one having to deal with third-party SOAP APIs! Still, I do need to do various XML and JSON parsing, and not having addressed the latter before, I've gone looking for libraries.

Currently I am about to start using Jackson, but was wondering if anyone had any warnings, advice, or recommended alternatives. In the course of looking at what was out there I have also come across Restlet, a ReST framework that seems well worth the time to figure out and deploy, so I will probably be doing that soon as well; any warnings or advice on this will also be welcome.
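
As a first smoke test, the Jackson usage I have in mind is roughly the following (Jackson 1.x's org.codehaus.jackson API; the JSON document is invented for illustration):

import java.util.Map;

import org.codehaus.jackson.map.ObjectMapper;

public class JacksonSmokeTest {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Untyped data-binding: JSON objects become Maps, arrays become Lists.
        Map<?, ?> doc = mapper.readValue(
                "{\"id\": 42, \"symbols\": [\"BRCA1\", \"TP53\"]}", Map.class);
        System.out.println(doc.get("symbols"));  // => [BRCA1, TP53]
    }
}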

One of the nice things about Restlet is its support for RDF. Granted it doesn't support querying, and the terminology in the interface is a bit confused, but it does use its native Resource interface for URIRefs, so it should integrate well. OTOH, if it does prove useful as a ReST framework, I can see myself writing a quick Sesame or Mulgara extension, as there is only so much you can do with RDF before you need a query and/or data-binding interface.
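
If I do pick up Restlet, the kind of minimal resource I would start from looks like this (a sketch assuming the Restlet 2.0 API; the class name and port are illustrative):

import org.restlet.Server;
import org.restlet.data.Protocol;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;

public class HelloResource extends ServerResource {
    @Get
    public String represent() {
        return "hello, world";
    }

    public static void main(String[] args) throws Exception {
        // Serve this single resource directly, without a full Component/Application.
        new Server(Protocol.HTTP, 8182, HelloResource.class).start();
    }
}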

Thursday, June 04, 2009

Deploying to tomcat using maven

This is a note to capture the process I am currently using to build and deploy a basic web application to tomcat for development.

  1. Create simple web app using archetype.
    mvn archetype:create ... -DarchetypeArtifactId=maven-archetype-webapp
  2. Add server to ~/.m2/settings.xml
      <servers>
        <server>
          <id>local-tomcat</id>
          <username>username</username>
          <password>password</password>
        </server>
      </servers>
    
  3. Add the tomcat plugin, referencing the server id, to the top-level pom.xml
      <build>
        ...
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>tomcat-maven-plugin</artifactId>
            <version>1.0-beta-1</version>
            <configuration>
              <server>local-tomcat</server>
            </configuration>
          </plugin>
        </plugins>
      </build>
    
  4. compile, test, deploy, and subsequently redeploy
      $ mvn compile
      $ mvn test
      $ mvn tomcat:deploy
      $ mvn tomcat:redeploy
    

Of course, what I would dearly love to know is how you configure a server URL in the settings.xml file. The documentation I can find (http://mojo.codehaus.org/tomcat-maven-plugin/configuration.html) describes how to use a custom URL, and how to configure authentication information. What it doesn't explain is how to do both at the same time, and all my attempts have resulted in XML validation errors when running Maven. If I figure it out I'll update this post.
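
For reference, the shape I keep expecting to work (untested, and it assumes the plugin accepts a url element alongside server in its configuration block) is:

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>tomcat-maven-plugin</artifactId>
        <version>1.0-beta-1</version>
        <configuration>
          <url>http://localhost:8080/manager</url>
          <server>local-tomcat</server>
        </configuration>
      </plugin>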

Wednesday, May 27, 2009

ANTLR: an exercise in pain.

Ok, so for my current project I need to build either a heuristic or a machine-learning-based fuzzy parser. To someone who has written numerous standard parsers before, this qualifies as interesting. Current approaches I'm considering include cascading concrete grammars, stochastic context-free grammars, and various forms of hidden-Markov-model-based recognisers. Whichever approach ends up working best, the first stage for all of them is a scanner.

So I started building a JFlex lexer, and was making reasonable progress when I found out that we are already using ANTLR for other projects, so I should probably use it as well. Having experienced Mulgara's peak of 5 different parser generators (this does eventually become ridiculous), I was more than willing to consolidate on ANTLR. Yes, it's LL, and yes, I have previously discussed my preference for LR; but it does support attaching semantic actions to productions, so my primary requirement of a parser generator is met; and anyway, it has an excellent reputation and a substantial active community.

What I am now stunned by is just how bad the documentation can be for such a popular tool. One almost non-existent wiki, a FAQ, and a woefully incomplete doxygen dump do not substitute for a reference manual. ANTLR has worse documentation than SableCC had when it consisted of an appendix to a master's thesis!

My conclusion: if you have any choice, don't use ANTLR. For Java: if you must use LL, I currently recommend JavaCC; if you can use an LALR parser, do so; my current preference is Beaver.