I read a blog post from Prismatic the other day, and it got me thinking about how far we, as programmers, have diverged from our roots. In the beginning, we designed small tools that did one thing and did it well. Now we're more concerned with meeting deadlines and shipping code as fast as possible. We've fallen in love with the phrase,
Release often. Release early. – The Cathedral and the Bazaar, Eric S. Raymond
and as a result have rushed design decisions or used something that made the decision for us. Inevitably, that means we’ve used some framework, somewhere in our code or stack, which could have easily been replaced with either a simple tool or a collection of libraries and some glue.
There’s nothing wrong with
RORE as a guiding principle. Focusing first on an
MVP that is not feature rich is a great business strategy. But it's a business strategy, not a software strategy. At the risk of repeating what the Prismatic team already said, there comes a point where using a framework costs you more than not using one. I will even argue that in most cases you can work just as fast, produce code of just as high quality, and stay in more control of your product if you avoid frameworks at all costs.
Frameworks force your hand. In many cases they cause
you to structure your code to work around their limitations. Case in point: ORMs. Look at the number of articles that come up with a quick Google search for the keywords "ORM" and "problem":
- The N+1 selects problem
- Coding Horror on ORMs
- The Vietnam of Computer Science
- ORM: A Solution that Creates Many Problems
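To make the first of those concrete, here is a minimal, self-contained Scala sketch of the N+1 selects problem. The domain classes and the query counter are invented for illustration; in-memory lists stand in for real tables, but the query-count arithmetic is the same one a naive ORM produces against a live database.

```scala
// Illustrative only: Author/Book and the counter are made up for this sketch.
object NPlusOneDemo {
  case class Author(id: Int, name: String)
  case class Book(authorId: Int, title: String)

  val authors = List(Author(1, "Adams"), Author(2, "Banks"))
  val books   = List(Book(1, "X"), Book(1, "Y"), Book(2, "Z"))

  var queryCount = 0
  def selectAuthors(): List[Author] = { queryCount += 1; authors }
  def selectBooksFor(a: Author): List[Book] = {
    queryCount += 1
    books.filter(_.authorId == a.id)
  }

  // Naive ORM-style lazy loading: 1 query for authors + N more, one per author.
  def naive(): Map[String, List[String]] =
    selectAuthors().map(a => a.name -> selectBooksFor(a).map(_.title)).toMap

  // A hand-written join fetches the same data in a single query.
  def singleJoin(): Map[String, List[String]] = {
    queryCount += 1
    authors.map(a => a.name -> books.filter(_.authorId == a.id).map(_.title)).toMap
  }
}
```

With two authors, the naive path issues three queries where the join issues one; with two thousand authors, the gap is what kills you.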
If you were to ask me about ORMs you’d quickly find out that I’m a rather vocal opponent of using them for any medium to large scale project. Which brings me to the point of this post.
My History With DB “Solutions”
In projects past, I worked in environments where the schema changed at least once a week and the code took 20 minutes to compile. I was required to support all database types and schemas that were handed to us, "within reason." In practice, that meant whoever wanted whatever schema on whatever database for whatever demo they were about to give, generally with only a few hours of notice. We were already using a third-party database abstraction layer to help "ease the burden" of database interaction, so, other than a few config files, what was the problem?
The speed of this abstraction layer depended heavily on the underlying database. On one database, a query mapped to a single join; on another database, the same query mapped to a five-table join. Thus, to reach acceptable performance, the queries changed and the code handling them had to change as well. By using the ORM in this manner, we lost most, if not all, of its benefits. The first cut of the code went fast; the next few were a frustrating cycle of compile, test, compile, test, explain the delay to management, and so on.
While my experiences with ORMs and other database "solutions" have improved significantly since then, I remain skeptical of the benefits these solutions promise. Nothing beats writing bare SQL for fine-tuning performance, memory usage, and caching strategies, or for ease of debugging. That said, writing bare SQL is time consuming and error prone, and if the DBAs change the schema, you, the developer, often won't know until a run-time error occurs. Thus sprang the idea for Squealer, a tool that writes the code for you based on your queries and validates it against the DB you intend to use.
Squealer is my way of avoiding an ORM while still reaping many of its rewards. It's not a library but a tool that generates code based on the database you're working with. How it works is simple: take an automatic code-generation exercise based on parsing text and apply it to parsing a database instead. For a class representing an individual table, you'd have access to column names, column data types, column default values, and any comments the DBAs left in the database.
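As a hedged sketch of what such generated code might look like (the table, its columns, and the chosen Scala types below are all invented for illustration, not Squealer's actual output):

```scala
// Hypothetical output for a table USERS(ID NUMBER, NAME VARCHAR2, CREATED DATE).
// DBA comment on the table: "application accounts".
case class Users(
  id: Long,               // ID NUMBER(19) NOT NULL
  name: String,           // NAME VARCHAR2(64)
  created: java.sql.Date  // CREATED DATE
)

object Users {
  // Column names captured at generation time. If a DBA changes the schema,
  // regenerating this file turns silent run-time breakage into compile errors.
  val tableName = "USERS"
  val columns   = List("ID", "NAME", "CREATED")
}
```

The point of the companion is that everything the compiler needs to catch a schema drift lives in checked-in, regenerated code rather than in strings scattered through the application.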
Since this is a Scala solution, and something I started working on in earnest after attending NEScala '12, I'm using a few libraries I heard about or saw presented there:
- TreeHugger, a library which exposes parts of the Scala AST to generate code
- Gll-Combinators, a parsing library with an upper bound of O(n³) and capable of handling ambiguity
- Config, a config library for JVM languages
I'd like to switch to using ScalaTest; Bill Venners was there holding a session on its next version. I'm also thinking of forking a co-worker's SQL parsing and conversion library,
Seekwell, to port it to using gll-combinators. This might open up the possibility of writing queries once and porting DB specific expressions to different DBs.
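The "write a query once, port the DB-specific expressions" idea can be sketched as a tiny expression tree with per-dialect renderers. Everything here is invented for illustration: the AST, the dialect names, and the two rendering rules; a real implementation would build the tree by parsing SQL with gll-combinators rather than constructing it by hand.

```scala
object SqlDialect {
  // A toy AST: just columns and string concatenation, a classic
  // DB-specific expression (Oracle uses ||, MySQL uses CONCAT).
  sealed trait Expr
  final case class Col(name: String) extends Expr
  final case class Concat(left: Expr, right: Expr) extends Expr

  // Render the same tree differently per target database.
  def render(e: Expr, dialect: String): String = e match {
    case Col(n) => n
    case Concat(l, r) =>
      dialect match {
        case "oracle" => s"${render(l, dialect)} || ${render(r, dialect)}"
        case "mysql"  => s"CONCAT(${render(l, dialect)}, ${render(r, dialect)})"
      }
  }
}
```

One parse, many renderings: the query is written once and each supported database gets its own dialect-correct text.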
The current version of Squealer does one thing: it parses the database and generates classes and companion objects from the database tables. You can, and should, limit it to a select few tables; otherwise you'll wind up with classes generated for meta-tables. All data mappings follow Oracle's suggested mappings.
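Since the post mentions the Config library, table selection might be expressed in a small HOCON file like the one below. The keys and values are invented for illustration and are not Squealer's actual configuration format.

```hocon
# Hypothetical Squealer configuration; all key names here are assumptions.
squealer {
  db {
    url      = "jdbc:oracle:thin:@//localhost:1521/orcl"
    user     = "app"
    password = ${?DB_PASSWORD}   # optionally pulled from the environment
  }
  # Restrict generation to these tables to avoid picking up meta-tables.
  tables = [ "USERS", "ORDERS", "ORDER_ITEMS" ]
}
```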
The next step is to add the ability to parse SQL statements and generate code based on them. I'm currently writing the code for this, but I'll admit I'm not happy with it yet. Hopefully I'll find enough time to finish before Scalathon '12 so that I can present it.
Squealer: An Anti-ORM Influenced Scala Tool for Working with Relational DBs, from our JCG partner Owein Reese at the Statically Typed blog.