He Sed, She Sed

Shortly after switching to GitHub, I decided to relicense Infinitum from GNU LGPL to Apache License 2.0. There aren’t really any implications except one: replacing the license and copyright header in every source file.

I’m far from being a Unix expert (more like amateur at best), but I figured sed would be the quickest and easiest way to do this. Sed is a Unix utility for processing text streams, and it allows you to replace string patterns in files. A simple string replacement using sed is quite easy:
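
Something along these lines does it (foo, bar, and file.txt are just stand-ins):

    sed -i 's/foo/bar/g' file.txt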

This will replace “foo” with “bar”. The “g” indicates that every matching occurrence in file.txt will be replaced, and “-i” means it will do the replacement in place.

In my case, I wanted to find every occurrence of the following string in every Java source file:

And I wanted to replace it with this:

I needed to do a multi-line replacement across a couple hundred files. Feeding lots of files to sed is actually pretty simple:
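
A command in this spirit does the feeding (the foo/bar pattern is a placeholder for the real replacement):

    find . -name "*.java" | xargs sed -i -e 's/foo/bar/g'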

This command will pass all of the Java files in the current directory (and all sub-directories) to sed. xargs is needed because it lets us avoid the “Argument list too long” problem.

In order to replace multiple lines, I needed to use an additional feature of sed. The “c” command lets you replace a range of lines:
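
For example, something like the following replaces lines 10 through 25 of file.txt with the given text (the range and replacement text are placeholders):

    sed -i -e '10,25c\
    new text' file.txt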

There’s a caveat that I have so far ignored. Many Unix utilities have idiosyncrasies or differences between platforms, and sed is no exception. I failed to mention that I was doing this on Mac OSX, whose implementation of sed, as I discovered, has some peculiar quirks. The “-e” in the above command is one such quirk: it’s needed to perform an in-place pattern replacement on OSX.

So, I had a way to process a bunch of files at once and a way to replace multiple lines in a file. Now I just needed to combine these two techniques to replace the license header in all of my project files.
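
Putting the two together, a command roughly like this one swaps out the old header in every Java file (the line range and the header text here are abbreviated placeholders, not the actual licenses):

    find . -name "*.java" | xargs sed -i -e '1,16c\
    /*\
     * Licensed under the Apache License, Version 2.0 (the "License");\
     * you may not use this file except in compliance with the License.\
     */'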

This replaces the range of lines covering the original license with the new license. It works, but the formatting becomes slightly off. That’s because OSX’s sed does not preserve leading whitespace, so the space before each asterisk is stripped. Fortunately, GNU sed does preserve leading whitespace, so building that and using it in place of OSX’s sed solved the problem. Also, GNU sed doesn’t require “-e” for in-place replacement.

Sed is a very handy little tool that every developer should have in his or her toolbelt. Admittedly, I don’t leverage Unix’s utilities nearly enough (although I’m working on it!), but tools like grep, sed, find, and xargs are immensely powerful and pretty simple to use. I think some developers have a tendency to over-engineer solutions for problems that could otherwise be solved using a trivial combination of these tools — I know I have! Of course, it’s helped that I’ve started to do all my programming, both work and play, on Mac. It’s my goal to become a better Unix developer!

Modularizing Infinitum: A Postmortem

In addition to getting the code migrated from Google Code to GitHub, one of my projects over the holidays was to modularize the Infinitum Android framework I’ve been working on for the past year.

Infinitum began as a SQLite ORM and quickly grew to include a REST ORM implementation, REST client, logging wrapper, DI framework, AOP module, and, of course, all of the framework tools needed to support these various functionalities. It evolved as I added more and more features in a semi-haphazard way. In my defense, the code was organized. It was logical. It made sense. There was no method, but there also was no madness. Everything was in an appropriately named package. Everything was coded to an interface. There was no duplicated code. However, modularity — in terms of minimizing framework dependencies — wasn’t really a consideration at the time, and the code was all in a single project.

The Wild, Wild West

The issue wasn’t how the code was organized, it was how the code was integrated. The project was cowboy coding at its finest. I was the only stakeholder, the only tester, the only developer — judge, jury, and executioner. I was building it for my own personal use after all. Consequently, there was no planning involved, unit testing was somewhere between minimal and non-existent, and what got done was at my complete discretion. Ultimately, what was completed any given day, more or less, came down to what I felt like working on.

What started as an ORM framework became a REST framework, which became a logging framework, which became an IOC framework, which became an AOP framework. All of these features, built from the ground up, were tied together through a context, which provided framework configuration data. More important, the Infinitum context stored the bean factory used for storing and retrieving bean definitions used by both the framework and the client. The different modules themselves were not tightly coupled, but they were connected to the context like feathers on a bird.

[Figure: Infinitum architecture]

The framework began to grow large. It was only about 300KB of actual code (JARed without ProGuard compression), but it had a number of library dependencies, namely Dexmaker, Simple XML, and GSON, which are over 1MB combined in size. Since it’s an Android framework, I wanted to keep the footprint as small as possible. Additionally, it’s likely that someone wouldn’t be using all of the features in the framework. Maybe they just need the SQLite ORM, or just the REST client, or just dependency injection. The way the framework was structured, they had to take it all or none.

A Painter Looking for a Brush

I began to investigate ways to modularize it. As I illustrated, the central problem lay in the fact that the Infinitum context had knowledge of all of the different modules and was responsible for calling and configuring their APIs. If the ORM is an optional dependency, the context should not need to have knowledge of it. How can the modules be decoupled from the context?

Obviously, there is a core dependency, Infinitum Core, which consists of the framework essentials. These are things used throughout the framework in all of the modules — logging, DI[1], exceptions, and miscellaneous utilities. The goal was to pull out the ORM, REST, and AOP modules.

My initial approach was to try and use the decorator pattern to “decorate” the Infinitum context with additional functionality. The OrmContextDecorator would implement the ORM-specific methods, the AopContextDecorator would implement the AOP-specific methods, and so on. The problem with this was that it would still require the module-specific methods to be declared in the Infinitum context interface. Not only would they need to be stubbed out in the context implementation, but a lot of module interfaces would also need to be shuffled and placed in Infinitum Core in order to satisfy the compiler. The problem remained: the context still had knowledge of all the modules.

I had another idea in mind. Maybe I could turn the Infinitum context from a single point of configuration to a hierarchical structure where each module has its own context as a “child” of the root context. The OrmContext interface could extend the InfinitumContext interface, providing ORM-specific functionality while still inheriting the core context methods. The implementation would then contain a reference to the parent context, so if it was unable to perform a certain piece of functionality, it could delegate to the parent. This could work. The Infinitum context has no notion of module X, Y, or Z, and, in effect, the control has been inverted. You could call it the Hollywood Principle — “Don’t call us, we’ll call you.”

[Figure: Infinitum context hierarchy]

There’s still one remaining question: how do we identify the “child” contexts and subsequently initialize them? The solution is to maintain a module registry. This registry will keep track of the optional framework dependencies and is responsible for initializing them if they are available. We use a marker class from each module, a class we know exists if the dependency is included in the classpath, to check its availability.

Lastly, we use reflection to instantiate the module context. I used an enum to maintain a registry of Infinitum modules. I then extended the enum to add an initialize method which loads a context instance.
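
The original enum isn’t reproduced here, but the idea looks roughly like this (the context class names are made up for illustration, and the module’s context class doubles as its marker class):

    public enum Module {

        // Each constant names the module's context implementation, which serves as
        // the marker class: if it's on the classpath, the module is present.
        ORM("com.example.infinitum.orm.OrmContextImpl"),
        AOP("com.example.infinitum.aop.AopContextImpl"),
        WEB("com.example.infinitum.web.RestContextImpl");

        private final String contextClass;

        Module(String contextClass) {
            this.contextClass = contextClass;
        }

        public boolean isAvailable() {
            try {
                Class.forName(contextClass);
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            }
        }

        // Reflectively instantiate the module's context.
        public InfinitumContext initialize() {
            try {
                return (InfinitumContext) Class.forName(contextClass).newInstance();
            } catch (Exception e) {
                throw new IllegalStateException("Unable to initialize module " + name(), e);
            }
        }
    }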

The modules get picked up during a post-processing step in the ContextFactory. It’s this step that also adds them as child contexts to the parent.
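
Conceptually, that step boils down to a loop like this (addChildContext stands in for however the parent context actually tracks its children):

    // During post-processing, register each available module's context
    // as a child of the parent InfinitumContext.
    for (Module module : Module.values()) {
        if (module.isAvailable()) {
            parentContext.addChildContext(module.initialize());
        }
    }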

New modules can be added to the registry without any changes elsewhere. As long as the context has been implemented, they will be picked up and processed automatically.

Once this architecture was in place, separating the framework into different projects was simple. Now Infinitum Core can be used by itself if only dependency injection is needed, the ORM can be included if needed for SQLite, AOP included for aspect-oriented programming, and Web for the RESTful web service client and various HTTP utilities.

We Shape Our Buildings, and Afterwards, Our Buildings Shape Us

I think this solution has helped to minimize some of the complexity. As with any modular design, not only is it more extensible, it’s also more maintainable. Each module context is responsible for its own configuration, which certainly helped to reduce complexity in the InfinitumContext implementation since it was previously handling the initialization for the ORM, AOP, and REST pieces. It also worked out in that I made the switch to GitHub[2] by setting up four discrete repositories, one for each module.

In retrospect, I would have made things a lot easier on myself if I had taken a more modular approach from the beginning. I ended up having to reengineer quite a bit, although once I had a viable solution, it actually wasn’t all that much work. I was fortunate in that I had things fairly well designed (perhaps not at a very high level, but in general) and extremely organized. It’s difficult to anticipate change, but chances are you’ll be kicking yourself if you don’t. I started the framework almost a year ago, and I never imagined it would grow to what it is today.

  1. I was originally hoping to pull out dependency injection as a separate module, but the framework relies heavily on it to wire up components.
  2. Now that the code’s pushed to GitHub, I begin the laborious task of migrating the documentation over from Google Code.

The Importance of Being Idle

“Practice not-doing and everything will fall into place.”

It’s good to be lazy. Sometimes, in programming, it can also be hard to be lazy. It’s this paradox that I will explore today — The Art of Being Lazy. Specifically, I’m going to dive into a design pattern known as lazy loading by discussing why it’s used, the different flavors it comes in, and how it can be implemented.

Lazy loading is a pretty simple concept: don’t load something until you really need it. However, the philosophy can be generalized further: don’t do something until you need to do it. It’s this line of thinking that has helped lead to processes like Kanban and lean software development (and also probably got you through high school). Notwithstanding, this tenet goes beyond the organizational level. It’s about optimizing efficiency and minimizing waste. There’s a lot to be said about optimizing efficiency in a computer program, which is why The Art of Being Lazy is an exceedingly relevant principle.

They Don’t Teach You This in School

My first real job as a programmer was working as a contractor for Thomson Reuters.  I started as a .NET developer (having no practical experience with it whatsoever) working on a web application that primarily consisted of C# and ASP.NET. The project was an internal configuration management database, which is basically just a big database containing information pertaining to all of the components of an information system (in this case, Thomson’s West Tech network, the infrastructure behind their legal technology division).

This CMDB was geared towards providing application-impact awareness, which, more or less, meant that operations and maintenance teams could go in and see what applications or platforms would be affected by a server going down (hopefully for scheduled maintenance and not a datacenter outage), which business units were responsible for said applications, and who the contacts were for those groups. It also provided various other pieces of information pertaining to these systems, but what I’m getting at is that we were dealing with a lot of data, and this data was all interconnected. We had a very complex domain model with a lot of different relationships. What applications are running on what app servers? Which database servers do they depend on? What NAS servers have what NAS volumes mounted on them? The list goes on.

Our object graph was immense. You can imagine the scale of infrastructure a company like Thomson Reuters has. The crux of the problem was that we were persisting all of this data as well as the relationships between it, and we wanted to allow users of this software to navigate this vast hierarchy of information. Naturally, we used an ORM to help manage this complexity. Since we were working in .NET, and many of us were Java developers, we went with NHibernate.

We wanted to be able to load, say, an application server, and see all of the entities associated with it. To the uninitiated (which, at the time, would have included myself), this might seem like a daunting task. Loading any given entity would result in loading hundreds, if not thousands, of related entities because it would load those directly related, then those related to the immediate neighbors, continuing on in what seems like a never-ending cascade. Not only would it take forever, but we’d quickly run out of memory! There’s simply no way you can deal with an object graph of that magnitude and reasonably perform any kind of business logic on it. Moreover, it’s certainly not scalable, so obviously this would be a very naive thing to do. The good news is that, unsurprisingly, it’s not necessary to do any of this.

It’s Good to be Lazy

The solution, of course, as I’ve already hit you across the face with, is a design pattern known as lazy loading. The idea is to defer initialization of an object until it’s truly needed (i.e. accessed). Going back to my anecdote, when we load, for example, an application server entity, rather than eagerly loading all its associated entities, such as servers, applications, BIG-IPs, etc., we use placeholders. Those related entities are then loaded on-the-fly when they are accessed.

Lazy loading can be implemented in a few different ways, through lazy initialization, ghost objects, value holders, and dynamic proxies — each has its own trade-offs. I’ll talk about all of them, but I’m going to primarily focus on using proxies since it’s probably the most widely-used approach, especially within the ORM arena.

Lazy initialization probably best illustrates the concept of lazy loading. With lazy initialization, the object to be lazily loaded is represented by a special marker value (typically null) which indicates that the object has yet to be loaded. Every call to the object will first check to see if it has been loaded/initialized, and if it hasn’t, it gets loaded/initialized. Thus, the first call to the object will load it, while subsequent calls will not need to. The code below shows how this is done.
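
A sketch of the idea, using placeholder ApplicationServer and Application entities:

    import java.util.ArrayList;
    import java.util.List;

    public class ApplicationServer {

        private final long id;
        private List<Application> applications;  // null means "not loaded yet"

        public ApplicationServer(long id) {
            this.id = id;
        }

        public List<Application> getApplications() {
            // The first call loads the collection; subsequent calls return the
            // already-initialized instance.
            if (applications == null) {
                applications = loadApplications();
            }
            return applications;
        }

        private List<Application> loadApplications() {
            // Placeholder for the actual database query.
            return new ArrayList<Application>();
        }
    }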

Ghost objects are simply entities that have been partially loaded, usually just having the ID populated so that the full object can be loaded later. This is very similar to lazy initialization. The difference is that the related entity is initialized but not populated.
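
A ghost might look something like this (placeholder names again):

    public class ServerGhost {

        private final long id;    // only the ID is populated up front
        private boolean loaded;
        private String hostname;  // everything else is filled in on demand

        public ServerGhost(long id) {
            this.id = id;
        }

        public String getHostname() {
            load();
            return hostname;
        }

        private void load() {
            if (!loaded) {
                // Placeholder: query the database by id and populate the remaining fields.
                loaded = true;
            }
        }
    }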

A value holder is an object that takes the place of the lazily loaded object and is responsible for loading it. The value holder has a getValue method which does the lazy loading. The entity is loaded on the first call to getValue.
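
A minimal value holder might look like this:

    interface Loader<T> {
        T load();
    }

    public class ValueHolder<T> {

        private final Loader<T> loader;
        private T value;

        public ValueHolder(Loader<T> loader) {
            this.loader = loader;
        }

        // The entity is loaded on the first call to getValue.
        public T getValue() {
            if (value == null) {
                value = loader.load();
            }
            return value;
        }
    }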

The above solutions get the job done, but their biggest problem is that they are pretty intrusive. The classes have knowledge that they are lazily loaded and require logic for loading. Luckily, there’s an option which helps to avoid this issue. Using dynamic proxies[1], we can write an entity class which has no knowledge of lazy loading and yet still lazily load it if we want to.

This is possible because the proxy extends the entity class or, if applicable, implements the same interface, allowing it to intercept calls to the entity itself. That way, the object need not be loaded, but when it’s accessed, the proxy intercepts the invocation, loads the object if needed, and then delegates the invocation to it. Since proxying classes requires bytecode instrumentation, we need to use a library like Cglib.

First, we implement an InvocationHandler we can use to handle lazy loading.
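
A sketch of such a handler (Foo is a placeholder entity class, and the database lookup is stubbed out):

    import java.lang.reflect.Method;

    import net.sf.cglib.proxy.InvocationHandler;

    public class LazyLoadingHandler implements InvocationHandler {

        private final long id;  // primary key of the entity to load
        private Foo target;     // the lazily loaded entity

        public LazyLoadingHandler(long id) {
            this.id = id;
        }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            // Load the entity if it hasn't been loaded yet, then delegate the call to it.
            return method.invoke(loadObject(), args);
        }

        private Foo loadObject() {
            if (target == null) {
                target = new Foo();  // placeholder for the actual database lookup by id
            }
            return target;
        }
    }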

Now, we can use Cglib’s Enhancer class to create a proxy.
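
Creating the proxy then looks something like this (the ID is arbitrary):

    import net.sf.cglib.proxy.Enhancer;

    Enhancer enhancer = new Enhancer();
    enhancer.setSuperclass(Foo.class);
    enhancer.setCallback(new LazyLoadingHandler(42));
    Foo foo = (Foo) enhancer.create();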

Now, the first call to any method on foo will invoke loadObject, which in turn will load the object into memory. Cglib actually provides an interface for doing lazy loading called LazyLoader, so we don’t even need to implement an InvocationHandler.
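
Using it instead looks roughly like this:

    import net.sf.cglib.proxy.Enhancer;
    import net.sf.cglib.proxy.LazyLoader;

    Foo lazyFoo = (Foo) Enhancer.create(Foo.class, new LazyLoader() {
        @Override
        public Object loadObject() {
            // Called once, on the first method invocation; the result is cached
            // and all subsequent calls are delegated to it.
            return new Foo();  // placeholder for the actual database lookup
        }
    });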

ORM frameworks like Hibernate use proxies to implement lazy loading, which is one of the features we took advantage of while developing the CMDB application. One of the nifty things that Hibernate supports is paged lazy loading, which allows entities in a collection to be loaded and unloaded while it’s being iterated over. This is extremely useful for one-to-many and, in particular, one-to-very-many relationships.

Lazy loading was also one of the features I included in Infinitum’s ORM, implemented using dynamic proxies as well.[2] At a later date, I may examine how lazy loading is implemented within the context of an ORM and how Infinitum uses it. It’s a very useful design pattern and provides some pretty significant performance optimizations. It just goes to show that sometimes being lazy pays off.

  1. For more background on proxies themselves, check out one of my previous posts.
  2. Java bytecode libraries like Cglib are not compatible with the Android platform. Android uses its own bytecode variant.

Dalvik Bytecode Generation

Earlier, I discussed the use of dynamic proxies and how they can be implemented in Java. As we saw, a necessary part of proxying classes is bytecode generation. From its onset, something I wanted to include in Infinitum was lazy loading. I also wanted to provide support for AOP down the road. Consequently, it was essential to include some way to generate bytecode at runtime.

The obvious choice would be to use a library like Cglib or Javassist, but sadly neither of those would work. That’s because Android does not use a Java VM; it uses its own virtual machine called Dalvik. As a result, Java source code isn’t compiled into Java bytecode (.class files), but rather Dalvik bytecode (.dex files). Since Cglib and Javassist are designed for Java bytecode manipulation, they do not work on the Android platform.[1]

What’s a programmer to do? Fortunately, some Googlers developed a new library for runtime code generation targeting the Dalvik VM called Dexmaker.

It has a small, close-to-the-metal API. This API mirrors the Dalvik bytecode specification giving you tight control over the bytecode emitted. Code is generated instruction-by-instruction; you bring your own abstract syntax tree if you need one. And since it uses Dalvik’s dx tool as a backend, you get efficient register allocation and regular/wide instruction selection for free.

Even better, Dexmaker provides an API for directly creating proxies called ProxyBuilder. If you followed my previous post on generating proxies, then using ProxyBuilder is a piece of cake. Similar to Java’s Proxy class, ProxyBuilder relies on an InvocationHandler to specify a proxy’s behavior.
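
A sketch of what that looks like (context is an Android Context and realFoo the object being proxied, both assumed to exist; package names have also moved around between Dexmaker versions):

    import java.io.File;
    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;

    import android.content.Context;

    import com.google.dexmaker.stock.ProxyBuilder;

    InvocationHandler handler = new InvocationHandler() {
        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            // Intercept the call, then delegate it to the real object.
            return method.invoke(realFoo, args);
        }
    };

    // Dexmaker needs a writable directory to cache the generated .dex files.
    File dexCache = context.getDir("dx", Context.MODE_PRIVATE);
    Foo fooProxy = ProxyBuilder.forClass(Foo.class)
            .dexCache(dexCache)
            .handler(handler)
            .build();  // throws IOException if the proxy class can't be generated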

Dexmaker enabled me to implement lazy loading and AOP within the Infinitum framework. It also opens up the possibility of using Mockito for unit testing in an Android environment because Mockito relies on proxies for generating mocks.[2]

  1. ASMDEX, a Dalvik-compatible bytecode-manipulation library, was released in March 2012, meaning Cglib could, in theory, be ported to Android since it relies on ASM.
  2. Infinitum is actually unit tested using Robolectric, which allows for testing Android code in a standard JVM.

Proxies: Why They’re Useful and How They’re Implemented

I wanted to write about lazy loading, but doing so requires some background on proxies. Proxies are such an interesting and useful concept that I decided it would be worthwhile to write a separate post discussing them. I’ve talked about them in the past, for instance on StackOverflow, so this will be a bit of a rehash, but I will go into a little more depth here.

What is a proxy? Fundamentally, it’s a broker, or mediator, between an object and that object’s user, which I will refer to as its client. Specifically, a proxy intercepts calls to the object, performs some logic, and then (typically) passes the call on to the object itself. I say typically because the proxy could simply intercept without ever calling the object.

[Figure: proxy diagram]

A proxy works by implementing an object’s non-final methods. This means that proxying an interface is pretty simple because an interface is merely a list of method signatures that need to be implemented. This facilitates the interception of method invocations quite nicely. Proxying a concrete class is a bit more involved, and I’ll explain why shortly.

Proxies are useful, very useful. That’s because they allow for the modification of an object’s behavior and do so in a way that’s completely invisible to the user. Few know about them, but many use them, usually without even being aware of it. Hibernate uses them for lazy loading, Spring uses them for aspect-oriented programming, and Mockito uses them for creating mocks. Those are just three (huge) use cases of many.

JDK Dynamic Proxies

Java provides a Proxy class which implements a list of interfaces at runtime. The behavior of a proxy is specified through an implementation of InvocationHandler, an interface which has a single method called invoke. The signature for the invoke method looks like the following:
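
    // From java.lang.reflect.InvocationHandler:
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable;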

The proxy argument is the proxy instance the method was invoked on. The method argument is the Method instance corresponding to the interface method invoked on the object.  The last argument, args, is an array of objects which consists of the arguments passed in to the method invocation, if any.

Each proxy has an InvocationHandler associated with it, and it’s this handler which is responsible for delegating method calls made on the proxy to the object being proxied. This level of indirection means that methods are not invoked on an object itself but rather on its proxy. The example below illustrates how an InvocationHandler would be implemented such that “Hello World” is printed to the console before every method invocation.
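
Something along these lines (the class name is my own):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;

    public class HelloWorldInvocationHandler implements InvocationHandler {

        private final Object target;  // the object being proxied

        public HelloWorldInvocationHandler(Object target) {
            this.target = target;
        }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            // Print "Hello World" before delegating the call to the proxied object.
            System.out.println("Hello World");
            return method.invoke(target, args);
        }
    }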

This is pretty easy to understand. The invoke method will intercept any method call by printing “Hello World” before delegating the invocation to the proxied object. It’s not very useful, but it does lend some insight into why proxies are useful for AOP.

An interesting observation is that invoke provides a reference to the proxy itself, meaning if you were to instead call the method on it, you would receive a StackOverflowError because it would lead to an infinite recursion.

Note that the InvocationHandler alone is of no use. In order to actually create a proxy, we need to use the Proxy class and provide the InvocationHandler. Proxy provides a static method for creating new instances called newProxyInstance. This method takes three arguments: a class loader, an array of interfaces to be implemented by the proxy, and the proxy behavior in the form of an InvocationHandler. An example of creating a proxy for a List is shown below.
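
Using the handler sketched above:

    import java.lang.reflect.Proxy;
    import java.util.ArrayList;
    import java.util.List;

    List<String> list = new ArrayList<String>();

    @SuppressWarnings("unchecked")
    List<String> proxy = (List<String>) Proxy.newProxyInstance(
            List.class.getClassLoader(),
            new Class<?>[] { List.class },
            new HelloWorldInvocationHandler(list));

    proxy.add("foo");  // prints "Hello World", then delegates add("foo") to the backing list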

The client invoking methods on the List can’t tell the difference between a proxy and its underlying object representation, nor should it care.

Proxying Classes

While proxying an interface dynamically is relatively straightforward, the same cannot be said for proxying a class. Java’s Proxy class is merely a runtime implementation of an interface or set of interfaces, but a class does not have to implement an interface at all. As a result, proxying classes requires bytecode manipulation. Fortunately, there are libraries available which help to facilitate this through a high-level API. For example, Cglib (short for code-generation library) provides a way to extend Java classes at runtime and Javassist (short for Java Programming Assistant) allows for both class modification and creation at runtime. It’s worth pointing out that Spring, Hibernate, Mockito, and various other frameworks make heavy use of these libraries.

Cglib and Javassist provide support for proxying classes because they can dynamically generate bytecode (i.e. class files), allowing us to extend classes at runtime in the same way that Java’s Proxy can implement an interface at runtime.

At the core of Cglib is the Enhancer class, which is used to generate dynamic subclasses. It works in a similar fashion to the JDK’s Proxy class, but rather than using a JDK InvocationHandler, it uses a Callback for providing proxy behavior. There are various Callback extensions, such as InvocationHandler (which is a replacement for the JDK version), LazyLoader, NoOp, and Dispatcher.
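
A sketch mirroring the earlier JDK example (MyClass is just a plain class that implements no interface):

    import java.lang.reflect.Method;

    import net.sf.cglib.proxy.Enhancer;
    import net.sf.cglib.proxy.InvocationHandler;

    final MyClass target = new MyClass();

    Enhancer enhancer = new Enhancer();
    enhancer.setSuperclass(MyClass.class);
    enhancer.setCallback(new InvocationHandler() {
        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            // Print "Hello World" before delegating to the actual object.
            System.out.println("Hello World");
            return method.invoke(target, args);
        }
    });

    MyClass myClassProxy = (MyClass) enhancer.create();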

This code is essentially the same as the earlier example in that every method invocation on the proxied object will first print “Hello World” before being delegated to the actual object. The difference is that MyClass does not implement an interface, so we didn’t need to specify an array of interfaces for the proxy.

Proxies are a very powerful programming construct which enables us to implement things like lazy loading and AOP. In general, they allow us to alter the behavior of objects transparently. In the future, I’ll dive into the specific use cases of lazy loading and AOP.

A Look at Spring’s BeanFactoryPostProcessor

One of the issues my team faced during my time at Thomson Reuters was keeping developer build times down. Many of the groups within WestlawNext had a fairly comprehensive check-in policy in that, after your code was reviewed, you had to run a full build which included running all unit tests and endpoint tests before you could commit your changes. This is a good practice, no doubt, but the group I was with had somewhere in the ballpark of 6000 unit tests. Moreover, since we were also testing our REST endpoints, it was necessary to launch an embedded Tomcat instance and deploy the application to it before those tests could execute.

Needless to say, build times could get pretty lengthy. I think I recall, at one point, it taking as long as 20 minutes to complete a full build. If a developer makes three commits in a day, that’s an hour of lost productivity. Extrapolate that out to a week and five hours are wasted, so you get the idea.

Of course, there were things we could do to cut down on that time — disabling the Cobertura and Javadoc Ant tasks for instance — but that only gets you so far. The annoying thing was that you typically had a Tomcat server running with the application already deployed, yet the build process started up a whole other instance in order to run the endpoint tests.

I explored the possibility of having endpoint tests run against a developer’s local server (or any server, in theory) by introducing a new property to the developer build properties file. It seems like a pretty simple concept: if the property doesn’t exist, run the tests normally by starting up an embedded Tomcat server. If it does exist, then simply route the HTTP requests to the specified host. Granted, it’s not going to significantly reduce the build time, but anything helps.

Unfortunately, it was not that simple. That’s because we couldn’t just run endpoint tests against the “live” app. The underlying issue was that our API, which we called ourselves from JavaScript and was also exposed to other consumers, relied on some other WestlawNext web services, such as user authentication and document services. We weren’t doing end-to-end integration testing, we were just testing our API. As a result, we used a separate Spring context which allowed the embedded Tomcat hook to deploy the application using client stubs in place of the actual web service clients.

So, things started to look a little moot. A developer would have to start their Tomcat server such that the client stub beans were registered with the Spring context in place of the normal client bean implementations. At the very least, it presented an interesting exercise. It was especially interesting because the client stubs were not part of the application’s classpath; they were separate from the app’s source and compiled to a bin-test directory.

Introducing the BeanFactoryPostProcessor

The solution I came up with was to implement one of Spring’s less glamorous (but still really neat) interfaces, the BeanFactoryPostProcessor. This interface provides a way for applications to modify their Spring context’s bean definitions before any beans get created. In my case, I needed to replace the client beans with their stub equivalents.

We start by implementing the interface, which has a single method, postProcessBeanFactory.
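
A skeleton might look like this (the class name is illustrative):

    import org.springframework.beans.BeansException;
    import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
    import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;

    public class ClientStubBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

        @Override
        public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory)
                throws BeansException {
            // Swap the real client beans for their stub equivalents.
            registerClientStubBeans(beanFactory);
        }

        // Implemented below.
        private void registerClientStubBeans(ConfigurableListableBeanFactory beanFactory) {
        }
    }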

So the question is how do we implement registerClientStubBeans? This is the method that will overwrite the client beans in the application context, but in order to avoid the dreaded NoClassDefFoundError, we need to dynamically add the stub classes to the classpath.
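
Continuing the sketch (the cast assumes the bean factory is also a BeanDefinitionRegistry, which holds for the usual DefaultListableBeanFactory):

    private void registerClientStubBeans(ConfigurableListableBeanFactory beanFactory) {
        // Make the stub classes and their dependencies visible before loading them.
        addClasspathDependencies();
        BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
        for (Bean bean : getClientStubBeans()) {
            registry.registerBeanDefinition(bean.getName(), bean.getDefinition());
        }
    }

    // Simple holder for a bean name and its corresponding BeanDefinition.
    private static class Bean {

        private final String name;
        private final BeanDefinition definition;

        Bean(String name, BeanDefinition definition) {
            this.name = name;
            this.definition = definition;
        }

        String getName() {
            return name;
        }

        BeanDefinition getDefinition() {
            return definition;
        }
    }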

The addClasspathDependencies method will add the stubs to the classpath, while getClientStubBeans will do just as its name suggests. I’ve also created a Bean class that will hold a bean name and its BeanDefinition. In order to register beans with the BeanFactory, we use the registerBeanDefinition method and pass in a bean name and corresponding BeanDefinition.

Let’s take a look at how we can add the stubs to the classpath at runtime.
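
A sketch of those two methods; this version adds the bin-test directory and the jars under libs via the ClassLoader’s addURL method, since a URLClassLoader resolves classes from directory and jar URLs, which has the same effect as adding each compiled .class file individually (it also assumes the context ClassLoader is a URLClassLoader):

    private void addClasspathDependencies() {
        // The compiled stubs live in bin-test; the libraries they rely on live in libs.
        addToClasspath(new File("bin-test"));
        for (File lib : new File("libs").listFiles()) {
            addToClasspath(lib);
        }
    }

    private void addToClasspath(File file) {
        try {
            URLClassLoader classLoader =
                    (URLClassLoader) Thread.currentThread().getContextClassLoader();
            // addURL is protected, so we invoke it reflectively to append the URL.
            Method addUrl = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
            addUrl.setAccessible(true);
            addUrl.invoke(classLoader, file.toURI().toURL());
        } catch (Exception e) {
            throw new IllegalStateException("Unable to add " + file + " to classpath", e);
        }
    }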

It looks like there’s a lot going on here, but it’s actually not too bad. The addClasspathDependencies method is simply going to call addToClasspath to add some classes we need, which include the stubs in bin-test but also some libraries they rely on in the libs directory. The more interesting code is in the latter of the two methods, which is responsible for taking a File, which will be a .class file, and adding it to the classpath. We do that by getting the context ClassLoader and then, using reflection, we invoke the method “addURL” by passing in the .class URL we want to add.

Lastly, we need to implement the getClientStubBeans method, which returns a list of the bean definitions we want to register with the context.
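
A sketch (the @ClientStub annotation’s value is assumed to carry the bean name, and bin-test is the stub output directory mentioned earlier):

    private List<Bean> getClientStubBeans() {
        // The compiled stub classes live under bin-test.
        return buildBeanDefinitions(new File("bin-test"));
    }

    private List<Bean> buildBeanDefinitions(File stubDirectory) {
        List<Bean> beans = new ArrayList<Bean>();
        for (File file : listClassFiles(stubDirectory)) {
            // "com/foo/client/stub/WebServiceClientStub.class" ->
            // "com.foo.client.stub.WebServiceClientStub"
            String relativePath = stubDirectory.toURI().relativize(file.toURI()).getPath();
            String className = relativePath.replace(".class", "").replace('/', '.');
            try {
                Class<?> clazz = Class.forName(className);
                // Only classes annotated with the custom @ClientStub annotation are stubs.
                ClientStub stub = clazz.getAnnotation(ClientStub.class);
                if (stub != null) {
                    BeanDefinition definition = BeanDefinitionBuilder
                            .genericBeanDefinition(clazz)
                            .getBeanDefinition();
                    beans.add(new Bean(stub.value(), definition));
                }
            } catch (ClassNotFoundException e) {
                throw new IllegalStateException("Unable to load stub class " + className, e);
            }
        }
        return beans;
    }

    private List<File> listClassFiles(File directory) {
        List<File> files = new ArrayList<File>();
        for (File file : directory.listFiles()) {
            if (file.isDirectory()) {
                files.addAll(listClassFiles(file));
            } else if (file.getName().endsWith(".class")) {
                files.add(file);
            }
        }
        return files;
    }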

Again, it’s a lot of code, but it’s not difficult to follow if you take it piece by piece. The getClientStubBeans method is going to get the directory in which the stub classes are located and pass it to buildBeanDefinitions. This method iterates over each file, extracts the file name (e.g. “com/foo/client/stub/WebServiceClientStub.class”) and converts it into a fully-qualified class name (e.g. “com.foo.client.stub.WebServiceClientStub”). Since we already added the stubs to the classpath, the class is then loaded by this name. Once the class is loaded, we can check if it is indeed a stub by introspectively looking for the ClientStub annotation (this custom annotation makes a bean eligible for auto-detection and specifies a bean name). If it is a stub, we use Spring’s handy BeanDefinitionBuilder to build a BeanDefinition for the stub.

Now, when Spring initializes, it will detect this BeanFactoryPostProcessor and invoke its postProcessBeanFactory method, resulting in the client stubs being registered with the context in place of their respective implementations. It’s a pretty unique use case (and, frankly, not particularly useful for the given scenario), but it helps illustrate how the BeanFactoryPostProcessor interface can be leveraged.

Probabilistic Primality Testing

An exceedingly common question asked in coding interviews is to write a function, method, algorithm, whatever to determine if a number is prime. Prime numbers have a wide range of applications in computer science, particularly with regard to cryptography. The idea is that factoring large numbers into their prime factors is extremely difficult.

“Because both the system’s privacy and the security of digital money depend on encryption, a breakthrough in mathematics or computer science that defeats the cryptographic system could be a disaster. The obvious mathematical breakthrough would be the development of an easy way to factor large prime numbers.” -Bill Gates, The Road Ahead

Granted, for most programming positions, this question is asked not to test the candidate’s knowledge of determining primality, but rather as an exercise to gain insight into his or her thought process on a well-understood problem.

The Naive Approach

The common approach to this problem is a simple brute-force solution. That is, divide the number n by every integer from 2 to n-1 and check the remainder. This is certainly the easiest way to solve the problem.
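
In Java, that looks something like this:

    public static boolean isPrime(int n) {
        if (n < 2) {
            return false;
        }
        // Brute force: check every possible factor from 2 to n - 1.
        for (int i = 2; i < n; i++) {
            if (n % i == 0) {
                return false;
            }
        }
        return true;
    }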

This is also incredibly inefficient. We can improve it a bit by checking factors from 2 to n/2, but that still isn’t very good for huge numbers. If n is composite (i.e. it’s not prime), then it can be factored into two numbers, one of which must be less than or equal to √n. With this in mind, we can improve our algorithm even further.
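
The improved version only needs to check factors up to the square root:

    public static boolean isPrime(int n) {
        if (n < 2) {
            return false;
        }
        // If n is composite, one of its factors must be <= sqrt(n).
        for (int i = 2; i <= Math.sqrt(n); i++) {
            if (n % i == 0) {
                return false;
            }
        }
        return true;
    }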

These are the solutions your interviewer will probably be looking for, starting with the most naive approach and improving from that, but even the revised version is not very efficient.

Probabilistic Testing

A slightly more sophisticated way to go about checking primality is to use probabilistic testing. Probabilistic tests use numbers, a, chosen at random from some sample space. This technique introduces a probability of error which can be reduced by repeating the test with independently selected a values.

There are a number of different probabilistic primality tests, some better than others. One of the simpler probabilistic primality tests is the Fermat primality test, which is based on Fermat’s little theorem and is used in PGP and RSA encryption. The theorem states that, if p is prime, then a^(p-1) ≡ 1 (mod p), where 1 ≤ a < p.
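
A sketch of the test using a single randomly chosen a:

    import java.math.BigInteger;
    import java.util.Random;

    public static boolean isProbablePrime(BigInteger p) {
        if (p.compareTo(BigInteger.valueOf(2)) < 0) {
            return false;
        }
        // Fermat's little theorem: if p is prime, a^(p-1) ≡ 1 (mod p).
        BigInteger a = randomWitness(p);
        return a.modPow(p.subtract(BigInteger.ONE), p).equals(BigInteger.ONE);
    }

    // Pick a random a with 1 <= a < p.
    private static BigInteger randomWitness(BigInteger p) {
        Random random = new Random();
        BigInteger a;
        do {
            a = new BigInteger(p.bitLength(), random);
        } while (a.compareTo(BigInteger.ONE) < 0 || a.compareTo(p) >= 0);
        return a;
    }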

This method will indicate if p is a probable prime, but since we’re only selecting a single a value, the possibility of p being incorrectly identified as a prime is high. In such situations, a is referred to as a Fermat liar. As suggested earlier, we can minimize this probability by repeating the process k times and selecting k witnesses.
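
Repeating the test with k witnesses (reusing randomWitness from above):

    public static boolean isProbablePrime(BigInteger p, int k) {
        if (p.compareTo(BigInteger.valueOf(2)) < 0) {
            return false;
        }
        for (int i = 0; i < k; i++) {
            BigInteger a = randomWitness(p);
            if (!a.modPow(p.subtract(BigInteger.ONE), p).equals(BigInteger.ONE)) {
                return false;  // a is a witness to p being composite
            }
        }
        return true;  // p is probably prime
    }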

This allows us to improve the probability of correctly determining the primality of p by performing k iterations, in which the probability of error becomes vanishingly small for large values of k. Nonetheless, being probabilistic in nature, this algorithm is inherently nondeterministic, which is really just a fancy way of saying we can get different results on different runs for the same input.

Polynomial, Deterministic Algorithms

Only within the last decade has a deterministic, polynomial-time algorithm for testing primality been developed. In 2002, the AKS primality test was discovered, moving primality into the P complexity class. The algorithm determines if a given number is prime or composite in polynomial time, and it’s the first to be simultaneously general, polynomial, unconditional, and deterministic. Try busting that out at an interview.

For the mathematically inclined (of which I do not consider myself), primality presents some interesting uses and even more interesting problems. It’s remarkable that a fast, deterministic solution for such a well-defined problem was found only in the last 10 years, and it makes you wonder what tough problems will be solved in the next 10 years and beyond.

Solving the Referential Integrity Problem

“A man with a watch knows what time it is. A man with two watches is never sure.”

I’ve been developing my open source Android framework, Infinitum, for the better part of 10 months now. It has brought about some really interesting problems that I’ve had to tackle, which is one of the many reasons I enjoy working on it so much.

Chicken or the Egg

Although it’s much more now, Infinitum began as an object-relational mapper which was loosely modeled after Hibernate. One of the first major issues I faced while developing the ORM component was loading object graphs. To illustrate what I mean by this, suppose we’re developing some software for a department store. The domain model for this software might look something like this:

As you can see, an Employee works in one Department, and, conversely, a Department has one or more Employees working in it, forming a many-to-one relationship and resulting in the class below.
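
A pared-down version might look like this:

    public class Employee {

        private long id;
        private String name;
        private Department department;  // many-to-one: each Employee works in one Department

        public long getId() { return id; }
        public void setId(long id) { this.id = id; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public Department getDepartment() { return department; }
        public void setDepartment(Department department) { this.department = department; }
    }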

Pretty straightforward, right? Now, let’s say we want to retrieve the Employee with ID 4028 from the database. Thinking about it at a high level and ignoring any notion of lazy loading, this appears to be rather simple.

  1. Perform a query on the Employee table.
  2. Instantiate a new Employee object.
  3. Populate the Employee object’s fields from the query result.

But there’s some handwaving going on in those three steps, specifically the last one. One of the Employee fields is an entity, namely department. Okay, this shouldn’t be a problem. We just need to perform a second query to retrieve the Department associated with the Employee (the result of the first query is going to include the Department foreign key — let’s assume it’s 14).

Then we just create the Department object, populate it and assign it to the respective field in the Employee.

Once again, there’s a problem. To understand why, it’s helpful to see what the Department class actually looks like.
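
Again in sketch form:

    import java.util.List;

    public class Department {

        private long id;
        private String name;
        private List<Employee> employees;  // one-to-many: the Employees working in this Department

        public long getId() { return id; }
        public void setId(long id) { this.id = id; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public List<Employee> getEmployees() { return employees; }
        public void setEmployees(List<Employee> employees) { this.employees = employees; }
    }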

Do you see what the issue is? In order to construct our Employee, we need to construct his Department. In order to construct his Department, we need to construct the Employee. Our object graph has a cycle that’s throwing us for a (infinite) loop.

Breaking the Cycle

Fortunately, there’s a pretty easy solution for this chicken-or-the-egg problem. We’ll make use of a HashMap to keep tabs on our object graph as we incrementally build it. This will make more sense in just a bit.

We’re going to use a HashMap keyed off of an integer hash where the map values will be the entities in the object graph.

The integer hash will be a unique value computed for each entity we need to load to fulfill the object graph. The idea is that we will store the partially populated entity in the HashMap to have its remaining fields populated later. Loading an entity will take the following steps:

  1. Perform query on the entity table.
  2. Instantiate a new entity object.
  3. Populate the entity object fields which do not belong to a relationship from the query result.
  4. Compute the hash for the partial entity object.
  5. Check if the HashMap contains the computed hash.
  6. If the HashMap contains the hash, return the associated entity object (this breaks any potential cycle).
  7. Otherwise, store the entity object in the HashMap using the hash as its key.
  8. Load related entities by recursively calling this sequence.

Going back to our Employee problem, retrieving an Employee from the database will take these steps:

  1. Perform query on the Employee table.
  2. Instantiate a new Employee object.
  3. Populate the Employee object fields which do not belong to a relationship from the query result.
  4. Compute the hash for the partial Employee object.
  5. Check if the HashMap contains the computed hash (it won’t).
  6. Store the Employee object in the HashMap using the hash as its key.
  7. Perform query on the Department table.
  8. Instantiate a new Department object.
  9. Populate the Department object fields which do not belong to a relationship from the query results.
  10. Compute the hash for the partial Department object.
  11. Check if the HashMap contains the computed hash (again, it won’t).
  12. Store the Department object in the HashMap using the hash as its key.
  13. The cycle will terminate and the two objects in the HashMap, the Employee and the Department, will be fully populated and referencing each other.

Considering the HashMap is not specific to any entity type (i.e. it will hold Employees, Departments, and any other domain types we come up with), how do we compute a unique hash for objects of various types? Moreover, we’re computing hashes for incomplete objects, so what gives?

Obviously, we can’t make use of hashCode() since not every field is guaranteed to be populated. Fortunately, we can take advantage of the fact that every entity must have a primary key, but, unless we’re using a policy where every primary key is unique across every table, this won’t get us very far. We will include the entity type as a factor in our hash code. Here’s the code Infinitum currently uses to compute this hash:
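
The actual Infinitum source isn’t reproduced here, but it boils down to folding both the entity’s type and its primary key into the hash, along these lines:

    // A sketch of the idea, not the verbatim Infinitum implementation.
    public static int computeHash(Class<?> type, Object primaryKey) {
        final int prime = 31;
        int hash = 7;
        hash = prime * hash + type.getName().hashCode();
        hash = prime * hash + primaryKey.hashCode();
        return hash;
    }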

This hash allows us to uniquely identify entities even if they have not been fully populated. Our cycle problem is solved!

Maintaining Referential Integrity

The term “referential integrity” is typically used to refer to a property of relational databases. However, when I say referential integrity, I’m referring to the notion of object references in an object graph. This referential integrity is something ORMs must maintain; otherwise, you run into some big problems.

To illustrate this, say our department store only has one department and two employees who work in said department (this might defeat the purpose of a department store, but just roll with it). Now, let’s say we retrieve one Employee, Bill, from the database. Once again ignoring lazy loading, this should implicitly load an object graph consisting of the Employee, the Department, and the Employees assigned to that Department. Next, let’s subsequently retrieve the second Employee, Frank, from the database. Again, this will load the object graph.

Bill and Frank both work in the same Department, but if referential integrity is not enforced, objects can become out of sync.

The underlying problem is that there are two different copies of the Department object, but we must abide by the Highlander Principle in that “there can be only one.” Bill and Frank should reference the same instance so that, regardless of how the Department is dereferenced, it stays synced between every object in the graph.

In plain terms, when we’re retrieving objects from the database, we must be cautious not to load the same one twice. Otherwise, we’ll have two objects corresponding to a single database row and things will get out of sync.

Enter Identity Map

This presents an interesting problem. Knowing what we learned earlier with regard to the chicken-or-the-egg problem, can we apply a similar solution? The answer is yes! In fact, the solution we discussed earlier was actually masquerading as a fairly common design pattern known as the Identity Map, originally cataloged by Martin Fowler in his book Patterns of Enterprise Application Architecture.

The idea behind the Identity Map pattern is that, every time we read a record from the database, we first check the Identity Map to see if the record has already been retrieved. This allows us to simply return a new reference to the in-memory record rather than creating a new object, maintaining referential integrity.
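
In sketch form, reusing the computeHash idea from earlier:

    import java.util.HashMap;
    import java.util.Map;

    public class IdentityMap {

        private final Map<Integer, Object> cache = new HashMap<Integer, Object>();

        // Return the in-memory instance if this record has already been loaded.
        // computeHash(...) is the type-plus-primary-key hash sketched earlier.
        public Object get(Class<?> type, Object primaryKey) {
            return cache.get(computeHash(type, primaryKey));
        }

        // Register a newly loaded entity so subsequent reads return the same instance.
        public void put(Class<?> type, Object primaryKey, Object entity) {
            cache.put(computeHash(type, primaryKey), entity);
        }
    }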

A secondary benefit to the Identity Map is that, since it acts as a cache, it reduces the number of database calls needed to retrieve objects, which yields a performance enhancement.

An Identity Map is normally tied to some sort of transactional context such as a session. This works exceedingly well for Infinitum because its ORM is built around the notion of a Session object, which can be configured as a scoped unit of work. The Infinitum Session contains a cache which functions as an Identity Map, solving both the cycle and the referential integrity issues.

It’s worth pointing out, however, that while an Identity Map maintains referential integrity within the context of a session, it doesn’t do anything to prevent incongruities between different sessions. This is a complex problem that usually requires a locking strategy, which is beyond the scope of this blog post.

Under the Hood

It may be helpful to see how Infinitum uses an Identity Map to solve the cycle problem. The method createFromCursor takes a database result cursor and transforms it into an instance of the given type. It makes use of a recursive method that goes through the process I outlined earlier. The call to loadRelationships will result in this recursion.
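
A simplified sketch of that flow (the helpers, the sessionCache, and the Cursor handling are placeholders for the real framework internals):

    @SuppressWarnings("unchecked")
    private <T> T createFromCursor(Cursor cursor, Class<T> type) {
        T entity = instantiate(type);
        populateNonRelationalFields(entity, cursor);
        int hash = computeHash(type, getPrimaryKey(entity));
        // If this entity is already in the Session cache, return the cached
        // instance; this is what breaks the cycle and preserves referential integrity.
        if (sessionCache.containsKey(hash)) {
            return (T) sessionCache.get(hash);
        }
        sessionCache.put(hash, entity);
        // Recursively load related entities; if any of them refer back to this
        // entity, the cache check above hands them the same instance.
        loadRelationships(entity);
        return entity;
    }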

Entities are stored in the Session cache as they are retrieved, allowing us to enforce referential integrity while also preventing any infinite loops that might occur while building up the object graph.

So that’s it! We’ve learned to make use of the Identity Map pattern to solve some pretty interesting problems. We looked at how we can design an ORM to load object graphs that contain cycles as well as maintain this critical notion of referential integrity. We also saw how the Identity Map helps to give us some performance gain through caching. Infinitum’s ORM module makes use of this pattern in its session caching and many other frameworks use it as well. In a future blog entry, I will talk about lazy loading and how it can be used to avoid loading large object graphs.