Monday, June 8, 2009

Book Review: Enterprise Service Bus by David Chappell

Enterprise Service Bus
David Chappell
2004, 352 pages
ISBN-10: 0596006756
ISBN-13: 978-0596006754

This is a review of the best-known text on ESB, written by David Chappell.

Main Points

If you don't have a chance to read the book, here are the main points covered on ESB:
  • ESB is the next generation of application integration, following the EAI hubs of yesterday.
  • ESB is a Message Oriented Middleware (MOM) solution - it creates loosely coupled integrations by enabling integrated applications to simply post messages to the ESB and the ESB handles the rest.
  • An ESB is more than just a queue: it can perform sophisticated routing logic, directing messages in different ways based on their contents.
  • ESB enables a distributed deployment model - each business unit/office can host their own ESB. Many messages will likely stay within the same ESB instance. Only some messages need to be routed to a different ESB instance.
Review

Summary: This is a great book for those who know nothing of ESB already. However, if you are already up to speed on the current trends in IT or enterprise software, this book probably won't provide much new material for you. To the credit of David et al, ESB has been mainstream for years now and you probably already know what is inside.

Details: This is a well written book, and something I would recommend for anyone who is unfamiliar with the premise of ESB. It is an O'Reilly book, but one that isn't focused on developers.

Which leads to the main drawback for me - I was expecting more technical depth. If you are a developer, and want to see lots of code samples, this isn't the book for you. This is aimed more at enterprise architects and IT business owners.

Book Review: The TUXEDO System: Software for Constructing and Managing Distributed Business Applications by Andrade et al

The TUXEDO System: Software for Constructing and Managing Distributed Business Applications
Juan M. Andrade, Mark Carges, Terence Dwyer, Stephen Felts


1996, 496 pages
ISBN-10: 0201634937
ISBN-13: 978-0201634938



I regret that I first read this book only a few months ago. As a long-time employee of BEA (owner of Tuxedo), I should have gotten to it much sooner. But, that's how it went for me. I was aware of Tuxedo at a casual level before reading this book. I was also well acquainted with J2EE, which gave me an interesting perspective on this technology.

Main Concepts

This section describes the highlights of the technology. I am going to assume a J2EE frame of reference because that was mine when I read it:
  • Tuxedo is a legendary piece of software. Many mission-critical applications, like real-time banking, and life-critical applications run on Tuxedo. If there are computers deployed in an underground bunker, chances are that they are running Tuxedo. Tuxedo was developed primarily in the '80s and early '90s.
  • Most of the capabilities of J2EE (developed in the late 90's/early 2000's) are also found in Tuxedo.
  • Tuxedo provides platform independence to C programmers. Tuxedo handles network code, portable data structures (buffers), and byte order problems experienced when messaging between different computer architectures.
  • The core concept in Tuxedo is that of a service. A service is written and deployed to one of the Tuxedo servers.
  • Clients call services via a synchronous or asynchronous Tuxedo API call. The client passes a buffer that contains the parameters. Depending on configuration, the call may also pass the transactional context.
  • Tuxedo service invocation is internally brokered via message queues. This is the key to its scalability. The queues provide a natural throttling of inbound requests, and can be done without blocking the client (if async is used).
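The queue-brokered invocation described above can be modeled in a few lines. This is a toy sketch, not Tuxedo's actual API (though the client function is loosely named after Tuxedo's asynchronous `tpacall`):

```python
# Toy model of Tuxedo-style brokered invocation: clients never call a
# service directly; they enqueue a request buffer, and the server drains
# the queue at its own pace. Names are illustrative, not Tuxedo's API.
import queue

request_q = queue.Queue()   # inbound requests; provides natural throttling
reply_q = queue.Queue()     # replies for asynchronous clients

def tpacall(service, buffer):
    """Async client call: post the request and return immediately."""
    request_q.put((service, buffer))

def server_step(services):
    """Server side: pull one request off the queue and run the service."""
    service, buffer = request_q.get()
    reply_q.put(services[service](buffer))

services = {"TOUPPER": lambda buf: buf.upper()}
tpacall("TOUPPER", "hello")      # client returns immediately
server_step(services)            # server processes when it has capacity
print(reply_q.get())             # prints "HELLO"
```

Because the client never blocks on the server, a burst of requests simply lengthens the queue rather than overwhelming the service, which is the scalability property the book emphasizes.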
Review

Summary: I highly recommend this book for developers/architects of systems that need to scale. Tuxedo initiated and/or popularized many great ideas for building scalable, fault-tolerant enterprise systems. Even if you won't work with Tuxedo, and plan to use J2EE, it is helpful to understand the history of the technology in which you work.

Details: This is the part where I'd normally nitpick the issues I found with the book. I didn't find any - I think it's good!

Sunday, June 7, 2009

Book Review: Explorer's Guide to the Semantic Web by Thomas Passin


Explorer's Guide to the Semantic Web
Thomas Passin
2004 | 304 pages
ISBN: 1932394206
Manning

This is another book review - this time on the topic of Semantic Web.

Main Concepts


This section talks about the main ideas I learned from the book. If you don't have time to read the book, these are the main things:
  • The concept of the Semantic Web is to make much of the information on the internet not only human readable, but also available for machines to process efficiently and correctly. If you read this blog, you probably already knew that, so...
  • There is a "layer cake" of technologies that can be used in the solution. The book reproduces the well-known layer-cake diagram from Tim Berners-Lee et al.

  • RDF is clearly a central part of this. The book covers this tech in chapter 2. It is essentially a mechanism for declaring factual statements. It works by connecting two resources with a relationship. Like: "Wrist partof Arm". Wrist is the Subject, partof is the Predicate, and Arm is the Object.
  • Once you have a collection of RDF statements, you can reason on that repository. For example, an insurance company can create a rule "Coverage for any partof Arm injuries is 93%". When a claim with a Wrist injury arrives, the RDF repository will be used to reason that it is partof an Arm, and therefore the 93% applies.
  • An interesting point of applying logic is: you can infer much more in a closed fact repository than in an open repository. Meaning, if you ask the question "Is Joe Smith an employee at ABC Inc?" to the corporate database (a closed repo) you can infer the answer is NO if no record exists for Joe Smith. In an open repository (the internet) the absence of a record means nothing. It can simply mean no one has posted it publicly.
  • The semantic web gets really nasty when you consider: 1) some sites will post incorrect information 2) sites will incorrectly annotate their information 3) facts will be posted in different languages. Technologies like RDF will solve some problems, but not all.
  • OWL is a language for describing an Ontology - essentially a type structure for resources. OWL can express constraints (a car can only have 4 wheels).
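The triple-and-reasoning idea from the Wrist/Arm example above can be sketched in plain Python, with no RDF library; the facts and the 93% rule are the book's example, the code itself is mine:

```python
# Tiny illustration of the RDF idea: facts as (subject, predicate, object)
# triples, plus a rule that walks "partof" links so a Wrist claim inherits
# the coverage defined for the Arm. Plain Python, no RDF library.
facts = {
    ("Wrist", "partof", "Arm"),
    ("Hand", "partof", "Arm"),
    ("Finger", "partof", "Hand"),
}
coverage = {"Arm": 0.93}

def partof_closure(part):
    """Yield everything 'part' is (transitively) part of."""
    for s, p, o in facts:
        if s == part and p == "partof":
            yield o
            yield from partof_closure(o)

def coverage_for(part):
    """Apply the insurance rule: use the coverage of any enclosing part."""
    if part in coverage:
        return coverage[part]
    for whole in partof_closure(part):
        if whole in coverage:
            return coverage[whole]
    return None

print(coverage_for("Finger"))   # Finger -> Hand -> Arm, so 0.93
```

Real systems express this with RDF serializations and query/rule languages, but the mechanism is the same: a store of triples plus inference over the predicates.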
Review

Summary: The copy I own was published in 2004. It is unfortunately out of date in many areas, so much so that I wouldn't recommend this book. Otherwise it is well written, and covers the topics pretty well (as they stood in 2004).

Details: The first chapter motivates the Semantic Web, and the second chapter explains RDF. So far so good. But thereafter, I hit many places where the book just seemed outdated. For example:
  • The Annotations chapter discusses the need for users to be able to contribute semantics to web pages they don't control. Several solutions are presented, but the obvious current solution, tagging (as on del.icio.us), isn't mentioned at all.
  • Page 103 makes the statement "Some news sites...allow readers to comment on their stories." Yes, I remember when that was a novelty too, but in 2009 that is the norm.
  • The Search chapter, without calling out any specific sentence, seems to describe the world of search as it stood in 2003/4. Search has evolved quite a bit since then.
I am surprised Manning hasn't published an update to this book - Semantic Web seems to be a popular enough topic.

Thursday, June 4, 2009

Weeding the Wiki Garden

I just started at my new job about 4 weeks ago. The company has been up and running for almost 5 years, and so there is a lot of history. To be effective in my job, I need to understand the history of the product and the discussions that led to its current design.

The company wiki seemed like a great place to go.

However, I had trouble making progress. As with most wikis, it had grown over time and its structure had fragmented. The SocialText blog summarizes it pretty well:

Like any organic process, however, it can be messy. There will be duplicate pages with slightly similar names, links to nonexistent pages, pages that aren't linked to, and so on.

My biggest problem actually was in the number of places where technical information was stashed. I kept bumping into new pages in different spaces in the wiki that contained important information. It was all over the place.

Wiki Gardening Advice

Frustrated, I looked for advice on how to properly perform gardening on a wiki. I had done this before back on the WebLogic Portal wiki, but that was a product I knew well. I wanted to see if there was general guidance for this kind of thing, especially for someone walking into a wiki without a lot of context.

The best document I found was from SocialText in this blog entry: Wiki Gardening Tips

I did everything from that list, plus a couple more:

Deferred Weeding

Instead of removing every page that I felt needed pruning, I tagged some with the label "weed". This let me ask a veteran's advice before removing the pages I wasn't sure about.

Two Use Cases: Browse and Search

Be aware of the two main use cases: browsing and searching. Newbies like me will be doing mostly browsing, because we don't know what we need. We need a rational navigation structure so we can systematically read pages from the site. On the opposite side are the veterans. Veterans do searches since they know what they are looking for.

This is important to keep in mind when gardening the wiki. When working with a page, it is helpful to know which use case(s) are important. It will help you to decide how to work with it.

For pages that provide basic information, make sure they are easily found in the place people will expect in the navigation hierarchy. For pages that relate to some obscure detail - don't worry too much about where it lives. Those pages are probably only going to be accessed via search.

Book Review: Microformats: Empowering Your Markup for Web 2.0 by John Allsopp

Microformats: Empowering Your Markup for Web 2.0
By John Allsopp

  • Published: 26th March 2007
  • ISBN-10: 1-59059-814-8


This is a review of the book Microformats: Empowering Your Markup for Web 2.0, which I read about 2 months ago.

Main Concepts

This section talks about the main ideas I learned from the book. If you don't have time to read the book, these are the main things:
  • Microformats is a technology that supports the concept of semantic web.
  • It weaves semantic information into standard presentation HTML. It works by inserting specific keywords into standard attributes like rel and class.
  • An HTML page with embedded microformats is viewable just like any other HTML page. However, certain tools can parse the page and extract information such as a person's contact information, an event's date and time, or a geographic location.
  • Tools can process a microformat enabled page with higher fidelity than simple text parsing. Mashup tools, for example, can extract the information and combine it with information from another source.
  • There are dozens of available microformats.
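To make the idea above concrete, here is a minimal sketch of extracting data from an hCard (the microformat for contact information: a `vcard` root class with properties like `fn`, the formatted name). The parsing code is my own illustration using only the Python standard library; real microformat tools are far more thorough:

```python
# Sketch of the microformat idea: semantic data rides inside ordinary
# HTML class attributes. This pulls an hCard "fn" (formatted name) out
# of a page using only the standard library.
from html.parser import HTMLParser

HCARD = '<div class="vcard"><span class="fn">Jane Doe</span></div>'

class FnExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_fn = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; look for class="... fn ..."
        classes = (dict(attrs).get("class") or "").split()
        if "fn" in classes:
            self.in_fn = True

    def handle_endtag(self, tag):
        self.in_fn = False

    def handle_data(self, data):
        if self.in_fn:
            self.names.append(data)

parser = FnExtractor()
parser.feed(HCARD)
print(parser.names)   # prints ['Jane Doe']
```

A browser renders the same markup as ordinary text, which is exactly the dual-audience (human and machine) property the book describes.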
Review

Summary: Overall, the writing itself was fine. What I didn't like, however, was the signal-to-noise ratio. After reading this 300-page book, I felt like I got about 75 pages of value.

Details: I was already somewhat familiar with Microformats before buying this book. Perhaps I was not the target audience.

I was expecting a focused treatment of Microformats, but often found I was distracted by the supporting material, e.g.:
  • There are many pages spent on CSS styling of microformats. For example, pages 150-160 are devoted to styling an hCard microformat so that it has rounded corners.
This may be ok, if this is what you are looking for. But I was looking for a more focused treatment of the subject.

Book Review: The Definitive Guide to Terracotta: Cluster the JVM for Spring, Hibernate and POJO Scalability by Ari Zilka

The Definitive Guide to Terracotta: Cluster the JVM for Spring, Hibernate and POJO Scalability
By Ari Zilka
Terracotta, Inc.
ISBN10: 1-59059-986-1
ISBN13: 978-1-59059-986-0
368 pp.
Published Jun 2008


This is a review of a book I read about 6 weeks ago on Terracotta, the Java clustering technology.


http://www.apress.com/book/view/9781590599860

Main Concepts

This section talks about the main ideas I learned from the book. If you don't have time to read the book, these are the main things:

  • Terracotta is a mechanism for sharing objects between multiple JVMs. A managed object is considered the same instance to all JVMs.
  • Terracotta works by injecting itself into the JVM classloading hierarchy, and copies objects byte-by-byte to other JVMs as needed.
  • Terracotta servers can be clustered.
  • Terracotta provides optional locking semantics around read/updates to these objects. Locks can optionally be held across a cluster of Terracotta servers.
  • Terracotta can optionally ensure that all JVMs that are using a particular object are kept in sync.
  • Terracotta can be configured to persist the state of managed objects to disk, so that if all TC nodes fail, the state will be preserved on restart.
  • Terracotta is different from a distributed data cache product like Oracle Coherence. Though Terracotta can be used for distributed caching, its focus is a bit different.
Review

Summary: This book is worth buying, assuming you are a Java developer like me. I found the coverage to be solid, and the technology itself is interesting.

Details: As stated, I felt that it is a good technical book, and worth your time. But, it isn't flawless. I had issues with the quality of writing.

First, a quick aside. I feel that compared to other engineers, I am a pretty good technical writer. I have seen some awful wiki pages in my time, and feel that I usually do better than most. However, I know my limitations. I have written various documentation chapters and whitepapers in my time that were reviewed by technical writers. These always come back to me with heavy edits, and each time I am disappointed to find out that I am not that good when compared to professional technical writers. Technical writing is a discipline, and it isn't easy.

The reason I bring this up is that I found this book hard to follow at times. I found the language to be loose - in a lot of places the text was not concise enough for my taste. To be clear, it wasn't horrible. It actually felt like something I would write. Which is to say, pretty good for an engineer, but a professional tech writer would have done better. Given that books are published much like software, I assume this was due to impossible deadlines and lack of manpower.

In summary, the quality is tolerable, but there is room for improvement. Also, I submitted a number of errata to the Apress site, but as of 6 weeks later they seem to have been ignored. I was told to expect a follow up email from the author, but no luck.

Tutorial: How to Install the Linux Standalone Flashplayer on Ubuntu 9.04

This entry explains how to install Adobe Flashplayer on an Ubuntu 9.04 system. We will start with a possible use case that brought you here.

A Use Case: You are Running a Maven build that uses FlexMojos

FlexMojos requires the standalone Flashplayer (not the Firefox plugin). If you don't install it, you will get an error like:

java.lang.NullPointerException
at org.sonatype.flexmojos.test.threads.AsVmLauncher.stop(AsVmLauncher.java:181)
Well, that exact error message will change at some point because it was logged as a bug. What it means is that you don't have the flashplayer installed.

Installing the Standalone Linux Flashplayer

Instructions:
  • Go specifically to THIS page at adobe. Don't follow the regular "Get Flash Player" link because it will think you just want the plugin.
  • Look for the link for the "Linux Standalone Player" download
  • Download the archive and save it to your hard drive. Extract it, then from the standalone/release directory copy the binary to /usr/bin:
    • tar -xvf flashplayer_xxx.tar.gz
    • sudo cp flashplayer /usr/bin/flashplayer
  • Restart your shell, and make sure you can access it: $ flashplayer
  • Bash out with your Flash out.

Tutorial: How To Install Postgres on Ubuntu

The Ubuntu Synaptic package manager will help you here. Make sure you install these two packages:

  • postgresql
  • pgadmin3 (pgAdmin III)

After install, you will need to change the default password for the postgres user.

  • sudo -u postgres psql postgres
  • \password postgres
  • Provide your password of choice

Now, launch the pgAdmin console via the menu Applications -> Programming -> pgAdmin 3. Log in using these values:

  • localhost
  • user: postgres
  • password you just set

Create a non-Admin user (aka login role):

  • Right click on Login Roles in the tree, select New Login Role...
  • Name: myuser
  • Password: mypassword
  • If needed, check the box for Superuser privs, Create... privs

Finally, to allow the build to operate without prompting for passwords, you need to:

  • Open the file ~/.pgpass for editing (or create it, if it doesn't exist)
  • Add this line:
    • localhost:5432:*:myuser:mypassword
  • Make the file private, or Postgres will ignore it: chmod 600 ~/.pgpass

You should be able to use Postgres like a champ now. You can log in with the new user using pgAdmin.

Tutorial: Configuring Subversion 1.6.2 on Ubuntu 9.04 with Eclipse 3.4

Having a problem with SVN commands not working? Let me guess, you have:

  • Ubuntu 9.04
  • Eclipse 3.4
  • Subclipse
  • and you need to connect to a Subversion 1.6.2 repository

If you are using the latest Subclipse plugin to check out your projects, it will be working as an SVN 1.6.x client. This means it will write 1.6.x file structures to disk.

Now you want to use the command line to do SVN commands. That means you need the 1.6.x command line client. BUT, if you do a "sudo apt-get install subversion" it currently gives you....wait for it....version 1.5.4 for your command line. And you cannot use a 1.5.x client on a working copy that has also been touched by a 1.6.x client.

So, you have to do the following workaround to get command line version 1.6.x:

  • Download the 1.6.2 RPM from http://www.open.collab.net/downloads/subversion/redhat.html
  • sudo apt-get install alien
  • sudo alien CollabNetSubversion-client-1.6.2-1.i386.rpm
  • sudo dpkg -i collabnetsubversion-client_1.6.2-2_i386.deb
  • This installs 1.6.2 into /opt/CollabNet/bin
  • Now, fix up your PATH env to make sure it is using 1.6.2 ("which svn")

Tutorial: Configuring OpenVPN on Ubuntu 9.04 with a PKCS#12 (.p12) key

Configuring OpenVPN on Ubuntu if you've been given a .p12 file is not a joyride (at least, as of Ubuntu 9.04). Currently, Ubuntu's network manager does not recognize PKCS#12.

This blog entry assumes your IT department has provided:

  • Your PKCS#12 file, possibly suffixed with .p12. E.g. "username.p12"
  • Your password for the VPN
  • An .ovpn file, which is a text file that contains your VPN configuration

So, here is what you need to do:

  • First, make sure you have the OpenVPN stuff installed:
    • Launch the package manager: System->Admin->Synaptic Pkg Mgr
    • Make sure you install (or have already) "openvpn" and "network-manager-openvpn"
    • Probably need to reboot at this point
  • Second, you will need to break the PKCS#12 key you were given into three separate files: .pem, .crt, .key
    • See this blog entry as the source of this next part
    • Open a terminal window and navigate to the folder where you downloaded the .p12 file IT gave you
    • Execute the following commands, using the key password IT gave you whenever one is asked for
    • openssl pkcs12 -in username.p12 -out username.pem
    • openssl pkcs12 -in username.p12 -out username.crt -clcerts -nokeys
    • openssl pkcs12 -in username.p12 -out username.key -nocerts
  • Third, launch the VPN connection configurator in the network manager
    • Click on the 4-bars network icon in the upper right of your screen
  • Fourth, specify the following in the dialog
    • VPN Connections -> Import....
    • Find your .ovpn file
    • User Cert: (your .crt file)
    • CA Cert: (your .pem file)
    • Private Key: (your .key file)
    • Password: (that password IT gave you)
  • Finally, you should be ready to go
    • Click on old 4-bars, and choose your VPN.
    • Hope that it connects, cause otherwise I don't know what to tell you.

OnTechnology

What, a new blog?

OK, Peter, you haven't done an outstanding job of updating your existing blog. Why create a SECOND blog?

It's a different audience.

In the past year, I have spent a lot of time doing developer projects. Like my JSF whitepaper for Oracle. And all of my Collaborate 2009 presentations. Etc.

But you probably don't know about them because I never blogged about them on my OnDemand blog. Why? Because that blog's audience is probably focused on Cloud Computing and SaaS, and not Java server development. I didn't want to cross the streams, so to speak, so I just never blogged about other technical topics.

But I would like to blog about technical topics NOT related to Cloud. Thus this new OnTechnology blog. This blog will not attempt to create a coherent set of entries - I expect it to be rather random. Like - "Here's how to set up printing on Ubuntu". Or, "I just read this Semantic Web book, this is what I think..." I don't expect many people to subscribe - I think the main consumer will be the Google Search Engine, and people who waltz in from there.

Here it goes...