Tuesday, July 29, 2014

Starting an ActiveMQ Project with Maven and Eclipse

I'm currently researching and prototyping a new subproject under the RHQ umbrella that will be a subsystem that can perform the emission and storage of audit messages (we're tentatively calling it "rhq-audit").

I decided to start the prototype with ActiveMQ. But one problem I had was I could not find a "starter" project that used ActiveMQ. I was looking for something with basic, skeleton Maven poms and Eclipse project files and some stub code that I could take and begin fleshing out to build a prototype. So I decided to publish my basic prototype to fill that void. If you are looking to start an ActiveMQ project, or just want to play with ActiveMQ and want a simple project to experiment with, then this might be a good starting point for you. This is specifically using ActiveMQ 5.10.

The code is in Github located at https://github.com/jmazzitelli/activemq-start

Once you clone it, you can run "mvn install" to compile everything and run the unit tests. Each maven module has an associated Eclipse project and can be directly imported into Eclipse as-is. If you have the Eclipse M2E plugin, these can be imported using that Eclipse Maven integration.

Here's a quick overview of the Maven modules and a quick description of some of the major parts of the code:

  • /pom.xml
    • This is the root Maven module's pom. The name of this parent module is rhq-audit-parent and is the container for all child modules. This root pom.xml file contains the dependency information for the project (e.g. dependency versions and the repositories where they can be found) and identifies the child modules that are built for the entire project.
  • rhq-audit-common
    • This Maven module contains some core code that is to be shared across all other modules in the project. The main purpose of this module is to provide code that is shared between consumer and producer (specifically, the message types that will flow from sender to receiver).
      • AuditRecord.java is the main message type the prototype project plans to have its producers emit and its consumers listen for. It provides JSON encoding and decoding so it can be sent and received as JSON strings.
      • AuditRecordProcessor.java is an abstract superclass that will wrap producers and consumers. This provides basic functionality such as connecting to an ActiveMQ broker and creating JMS sessions and destinations.
  • rhq-audit-broker
    • This provides the ability to start an ActiveMQ broker. It has a main() method to allow you to run it on the command line, as well as the ability to instantiate it in your own Java code or unit tests.
      • EmbeddedBroker.java is the class that provides the functionality to embed an ActiveMQ broker in your JVM. It can be configured using either an ActiveMQ .properties configuration file or an ActiveMQ .xml configuration file.
  • rhq-audit-test-common
    • The thinking with this module is that there is probably going to be common test code that is going to be needed between producer and consumer. This module is to support this. The intent is for other Maven modules in this project to list this module as a dependency with a scope of "test". For example, some common code will be needed to start a broker in unit tests - including this module as a test dependency will give unit tests that common code.
  • rhq-audit-producer
    • This provides the producer-side functionality of the project. The intent here is to flesh out the API further. This will become rhq-audit's producer API.
      • AuditRecordProducer.java provides a simple API that allows a caller to connect the producer to the broker and send messages. The caller need not worry about working with the JMS API as that is taken care of under the covers.
  • rhq-audit-consumer
    • This provides the consumer-side functionality of the project. The intent here is to flesh out the client-side API further. This will become rhq-audit's consumer API.
      • AuditRecordConsumer.java provides a simple API that allows a caller to connect the consumer to the broker and attach listeners so they can process incoming messages.
      • AuditRecordListener.java provides the abstract listener class that is to be extended in order to process received audit records. The idea here is that subclasses can process audit records in different ways - perhaps one can store the audit records in a backend data store, and another can log the audit messages in rsyslog.
      • AuditRecordConsumerTest.java provides a simple end-to-end unit test that uses the embedded broker to pass messages between a producer and consumer.
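The listener extension point described above can be sketched as follows. This is a minimal, self-contained illustration only: the real AuditRecordListener and AuditRecord classes in the repository have richer APIs, and the method name onAuditRecord and the field shown here are assumptions for the sake of the sketch.

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerSketch {
    // Stand-ins modeled on the project's types; real signatures may differ.
    static final class AuditRecord {
        private final String message;
        AuditRecord(String message) { this.message = message; }
        String getMessage() { return message; }
    }

    static abstract class AuditRecordListener {
        public abstract void onAuditRecord(AuditRecord record);
    }

    // A subclass that simply collects records in memory; a real subclass
    // might instead persist records to a data store or write them to rsyslog.
    static final class CollectingListener extends AuditRecordListener {
        final List<String> seen = new ArrayList<>();
        @Override
        public void onAuditRecord(AuditRecord record) {
            seen.add(record.getMessage());
        }
    }

    public static void main(String[] args) {
        CollectingListener listener = new CollectingListener();
        listener.onAuditRecord(new AuditRecord("user login"));
        System.out.println(listener.seen.get(0)); // prints "user login"
    }
}
```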
Taking a look at AuditRecordConsumerTest shows how this initial prototype can be tested and how audit records can be sent and received through an ActiveMQ broker:

1. Create and start the embedded broker:
VMEmbeddedBrokerWrapper broker = new VMEmbeddedBrokerWrapper();
String brokerURL = broker.getBrokerURL();
2. Connect the producer and consumer to the test broker:
producer = new AuditRecordProducer(brokerURL);
consumer = new AuditRecordConsumer(brokerURL);
3. Prepare to listen for audit record messages:
consumer.listen(Subsystem.MISCELLANEOUS, listener);
4. Produce audit record messages.
At this point, the messages are flowing and the test code will ensure that all the messages were received successfully and contained the expected data.
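The produce/listen/receive flow in those steps can be illustrated with a self-contained sketch. Note this is a toy stand-in: a BlockingQueue plays the role of the JMS destination so the flow is runnable here without ActiveMQ; the real test uses VMEmbeddedBrokerWrapper and the actual broker.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class FlowSketch {
    interface Listener { void onMessage(String json); }

    public static void main(String[] args) throws Exception {
        // Stand-in for the broker's destination (not ActiveMQ).
        BlockingQueue<String> destination = new LinkedBlockingQueue<>();

        // "Consumer" side: a thread waits for a message and dispatches it to a listener.
        CountDownLatch received = new CountDownLatch(1);
        Listener listener = json -> {
            System.out.println("received: " + json);
            received.countDown();
        };
        Thread consumer = new Thread(() -> {
            try {
                listener.onMessage(destination.take());
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();

        // "Producer" side: emit an audit record encoded as a JSON string.
        destination.put("{\"subsystem\":\"MISCELLANEOUS\",\"message\":\"hello\"}");

        // The test then asserts the message arrived with the expected data.
        if (!received.await(5, TimeUnit.SECONDS)) {
            throw new AssertionError("message not received");
        }
    }
}
```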

A lot of the code in this prototype is generic enough to provide functionality for most messaging projects; but of course there are rhq-audit specific types such as AuditRecord involved. The idea is to now flesh out this generic prototype to further meet the requirements of the rhq-audit project. More on that will be discussed in the future. But for now, perhaps this could help others come up to speed quickly with an ActiveMQ project without having to start from scratch.

Tuesday, April 15, 2014

Completed Remote Agent Install

My previous blog post talked about work being done on implementing an enhancement request which asked for the ability to remotely install an RHQ Agent. That feature has been finished and checked into the master branch and will be in the next release.

I created a quick 11-minute demo showing the UI (which is slightly different from what the prototype looked like) and demonstrates the install, start, stop, and uninstall capabilities of this new feature.

I can already think of at least two more enhancements that can be added to this in the future. One would be to support SSH keys rather than passwords (so you don't have to require passwords to make the remote SSH connection) and the other would be to allow the user to upload a custom rhq-agent-env.sh file so that file can be used to override the default agent environment (in other words, it would be used instead of the default rhq-agent-env.sh that comes with the agent distribution).

Thursday, March 20, 2014

Remote Install of JON Agent

A new feature request has been added to JBoss Operations Network. JON users will now be able to install agents on remote boxes from the UI as long as the remote box is accessible via SSH.

All you need is the hostname of the machine you want to install the agent on, its SSH port that it's listening to (default is 22) and the credentials of the user who will install (and run) the JON agent.

You can install, start, and stop the JON agent from this UI mechanism. You can also use it to get the status of any installed JON agent, and even to find whether and where an agent may be installed on the remote box.

Here's a snapshot of the UI page after I just successfully remote installed a JON agent:

Thursday, September 12, 2013

Availability Updates in RHQ GUI

In older versions, the RHQ GUI showed you the availability status of resources but if you were viewing the resource in the GUI, it did not update the icons unless you manually refreshed the screen.

In RHQ 4.9, this has changed. If you are currently viewing a resource and its availability status changes (say, it goes down, or it comes back up), the screen will quickly reflect the new availability status by changing the availability icon and by changing the tree node icons.

To see what I mean, take a look at this quick 3-minute demo to see the feature in action (view this in full-screen mode if you want to get a better look at the icons and tree node badges):

Wednesday, September 11, 2013

Fine-Grained Security Permissions In Bundle Provisioning

RHQ allows one to bundle up content and provision that bundle to remote machines managed by RHQ Agents. This is what we call the "Bundle" subsystem, though the documentation actually titles it the "Provisioning" subsystem. I've blogged about it here and here if you want to read more about it.

RHQ 4.9 has just been released and with it comes a new feature in the Bundle subsystem. RHQ can now allow your admins to give users fine-grained security constraints around the Bundle subsystem.

In the older RHQ versions, it was an all-or-nothing prospect - a user either could do nothing with respect to bundles or could do everything.

Now, users can be granted certain permissions surrounding bundle functionality. For example, a user could be given the permission to create and delete bundles, but that user could be denied permission to deploy those bundles anywhere. A user could be restricted so that he is allowed to deploy bundles only to a certain group of resources but not others.

Along with the new permissions, RHQ has now introduced the concept of "bundle groups." Now you can organize your bundles into separate groups, while providing security constraints around those bundles so only a select set of users can access, manipulate, and deploy bundles in certain bundle groups.

If you want all the gory details, you can read the wiki documentation on this new security model for bundles.

I put together a quick, 15-minute demo that illustrates this fine-grained security model. It demonstrates the use of the bundle permissions to implement a typical use-case that demarcates workflows to provision different applications to different environments:

Watch the demo to see how this can be done. The demo will illustrate how the user "HR Developer" will only be allowed to create bundles and put them in the "HR Applications" bundle group and the user "HR Deployer" will only be allowed to deploy those "HR Applications" bundles to the "HR Environment" resource group.

Again, read the wiki for more information. The RHQ 4.9 release notes also has information you'll want to read about this.

Monday, August 12, 2013

Moving from Eclipse to IntelliJ

Well, the other shoe dropped. The final straw broke the camel's back. I tried one more time and, once again, Eclipse still doesn't have a good Maven integration - at least for such a large project as RHQ.

Now, for some history, I've been using Eclipse for at least a decade. I like it. I know it. I'm comfortable with it. While I can't claim to know how to use everything in it, I can navigate around it pretty well and can pump out some code using it.

However, the Maven integration is just really bad from my experience. I've tried, I really have. In fact, it has been an annual ritual of mine to install the latest Maven plugin and see if it finally "just works" for me. I've done this for at least the last three years if not longer. So it is not for lack of trying. Every year I keep hearing "try it again, it got better" (I really have heard this over the span of years). But every time I install it and load in the RHQ project, it doesn't "just work". I tried it again a few weeks ago and nothing has changed. What I expect is to import my root Maven module and have Eclipse load it in and let me just go back to doing my work. Alas, it has never worked.

I hate to leave Eclipse because, like I said, I have at least a decade invested in using it. But I need a good Maven integration. I don't want to have tons of Eclipse projects in my workspace - but then again, if the Eclipse Maven plugin needs to create one project per Maven module so it "just works", so be it. I can deal with it (after all, IntelliJ has tons of modules, even if it places them under one main project). But I can't even get that far.

So, after hearing all the IntelliJ fanboys denigrate Eclipse and tell me that I should move to IntelliJ because "it's better", I finally decided to at least try it.

Well, I can at least report that IntelliJ's Maven integration actually does seem to "just work" - but that isn't to say I didn't have to spend 15 minutes or so figuring out some things to get it to work (I had to make sure I imported it properly and I had to make sure to set some options). But spending 15 minutes and getting it to work is by far better than what I've gone through with Eclipse (which is, spending lots more time and never getting it to work over the years). So, yes, I can confirm that the IntelliJ folks are correct that Maven integration "just works" - with that small caveat. It actually is very nice.

In addition, I really like IntelliJ's git integration - it works out of the box and has some really nice features.

I also found that IntelliJ provides an Eclipse keymap - so, while I may not like all the keystrokes required to unlock all the features in IntelliJ (more on that below), I do like how I can use many of the Eclipse keystrokes I know and have it work in IntelliJ.

As I was typing up this blog, I was about to rail on IntelliJ about its "auto-save" feature. Reading their Migration FAQ they make it sound like you can't turn off that auto-save feature (where, as soon as you type, it saves the file). I really hate that feature. But, I just found out, to my surprise, you can kinda turn that off. It still maintains the changes though, in what I suppose is a cache of changed files. So if I close the editor with the changed file, and open it back up again, my changes are still there. That's kinda annoying (but yet, I can see this might be useful, too!). But at least it doesn't change the source file. I'll presume there is a way to throw away these cached changes - at least I can do a git revert and that appears to do it.

However, with all that said, as I use IntelliJ (and really, it's only been about a week), I'm seeing on the edges of it things that I do not like where Eclipse is better. If you are an IntelliJ user and know how to do the following, feel free to point out my errors. Note: I'm using the community version of IntelliJ v12.14.

For one thing, where's the Problems View that Eclipse has? I mean, in Eclipse, I have a single view with all the compile errors within my project. I do not see anywhere in IntelliJ a single view that tells me about problems project-wide. Now, I was told that this is because Eclipse has its own compiler and IntelliJ does not. That's an issue for me. I like being able to change some code in a class and watch the Problems View report all the breakages that that change causes. I see in the Project view, you can limit the scope to problem files. That gets you kinda there - but I want to see it as a list (not a tree), and I want to see the error messages themselves, not just which files have errors in them.

Second, the Run/Debug Configuration feature doesn't appear to be as nice as Eclipse. For example, I have some tool configurations in Eclipse that, when selected, prompt the user for parameter values, but apparently, IntelliJ doesn't support this. In fact, Eclipse supports lots of parameter replacement variables (${x}) whereas it doesn't look like IntelliJ supports any.

Third, one nice feature in Eclipse is the ability to have the source code for a particular method pop up in a small window when you hover over a method call while holding down, say, the ALT key (this is configurable in Eclipse). But I can't see how this is done in IntelliJ. I can see that View->Quick Definition does what I want, but I just want to hold down, say, ALT or SHIFT and have the quick definition pop up where I hover. I have a feeling you can tell IntelliJ to do this, I just don't know how.

Another thing I am missing is an equivalent to Eclipse's "scrapbook" feature. This was something I use(d) all the time. In any scrapbook page, you can add and highlight any Java snippet and execute it. The Console View shows the output of the Java snippet. This is an excellent way to quickly run some small code snippet you want to try out to make sure you got it right (I can't tell you how many times I've used it to test regexes). The only way it appears you can do this in IntelliJ is if you are debugging something and you are at a breakpoint. From there, you can execute random code snippets. But Eclipse has this too (the Display view). I want a way to run a Java snippet right from my editor without setting up a debug session.

I also don't want to see these "TODO" or "JetGradle" or other views that IntelliJ seems to insist I want. You can't remove them from the UI entirely.

Finally, IntelliJ seems to be really keen on keyboard control. I am one of those developers that hates relying on keystrokes to do things. I am using a GUI IDE, I want to use the GUI :-) I like mouse/menu control over keystrokes. I just can't remember all the many different key combinations to do things, plus my fingers can't consistently reach all the F# function keys, but I can usually remember where in the menu structure a feature is. I'm sure as I use IntelliJ more that I'll remember more. And most everything does seem to have a main menu or popup-menu equivalent. So, this is probably just a gripe that I have to spend time on a learning curve to learn a new tool - can't really blame IntelliJ for that (and with the Eclipse keymap, lots of Eclipse keystrokes now map in IntelliJ). I guess I have to blame Eclipse for that since it's forcing me to make this move in the first place.

Some of those are nit-picky, others not. And I'm sure I'll run into more things that either IntelliJ doesn't have or is hiding from me. Maybe as I use IntelliJ more, and my ignorance of it recedes a bit, I'll post another blog entry to indicate my progress.

Wednesday, May 8, 2013

Creating Https Connection Without javax.net.ssl.trustStore Property

Question: How can you use the simple Java API call java.net.URL.openConnection() to obtain a secure HTTP connection without having to set or use the global system property "javax.net.ssl.trustStore"? How can you make a secure HTTP connection and not even need a truststore?

I will show you how you can do both below.

First, some background. Java has a basic API to make a simple HTTP connection to any URL via URL.openConnection(). If your URL uses the "http" protocol, it is very simple to use this to make basic HTTP connections.

Problems creep in when you want a secure connection over SSL (via the "https" protocol). You can still use that API - URL.openConnection() will return a HttpsURLConnection if the URL uses the https protocol - however, you must ensure your JVM can find and access your truststore in order to authenticate the remote server's certificate.

[note: I won't discuss how you get your trusted certificates and how you put them in your truststore - I'll assume you know, or can find out, how to do this.]

You tell your JVM where your truststore is by setting the system property "javax.net.ssl.trustStore" and you tell your JVM how to access your truststore by giving your JVM the password via the system property "javax.net.ssl.trustStorePassword".

The problem is these are global settings (you often see instructions telling you to set these values via the -D command line arguments when starting your Java process) so everything running in your JVM must use that truststore. And you can't alter those system properties during runtime and expect those changes to take effect. Once you ask the JVM to make a secure connection, those system property values appear to be cached in the JVM and are used thereafter for the life of the JVM (I don't know exactly where in the JRE code these values are cached, but my experience shows me that they are). Changing those system properties later on in the lifetime of the JVM has no effect; the original values are forever used.

Another problem that some people run into is having the need for a truststore in the first place. Sometimes you don't have a requirement to authenticate the server endpoint; however, you would still like to send your data encrypted over the wire. You can't do this readily since the connection you obtain from URL.openConnection() will, by default, expect to use your truststore located at the path pointed to by the system property javax.net.ssl.trustStore.

To allow me to use different truststores for different connections, or to allow me to encrypt a connection but not authenticate the endpoint, I wrote a Java utility object that allows you to do just this.

The main constructor is this:

public SecureConnector(String secureSocketProtocol,
                       File   truststoreFile,
                       String truststorePassword,
                       String truststoreType,
                       String truststoreAlgorithm)

You pass it a secure socket protocol (such as "TLS") and your truststore file location. If the truststore file is null, the SecureConnector object will assume you do not want to authenticate the remote server endpoint and you only want to encrypt your over-the-wire traffic. If you do provide a truststore file, you need to provide its password, its type (e.g. "JKS"), and its algorithm (e.g. "SunX509") - if you pass in null for type and/or algorithm, the JVM defaults are used.
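The null-handling for type and algorithm described above can be sketched with the JVM's own default lookups. This is an assumed fallback for illustration; the real SecureConnector may choose its defaults differently.

```java
import java.security.KeyStore;
import javax.net.ssl.TrustManagerFactory;

public class DefaultsSketch {
    public static void main(String[] args) {
        String truststoreType = null;       // caller passed null
        String truststoreAlgorithm = null;  // caller passed null

        // Fall back to the JVM defaults when the caller does not supply values.
        String type = (truststoreType != null) ? truststoreType
                                               : KeyStore.getDefaultType();
        String algorithm = (truststoreAlgorithm != null) ? truststoreAlgorithm
                                                         : TrustManagerFactory.getDefaultAlgorithm();

        // Typical output is something like "jks SunX509" or "pkcs12 PKIX",
        // depending on the JVM vendor and version.
        System.out.println(type + " " + algorithm);
    }
}
```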

Once you create the object, just obtain a secure connection to any URL via a call to SecureConnector.openSecureConnection(URL). This expects your URL to have a protocol of "https". If successful, an HttpsURLConnection object is returned and you can use it like any other connection object. You do not need to set javax.net.ssl.trustStore (or any other javax.net.ssl system property) and, as explained above, you don't even need to provide a truststore at all (assuming you don't need to do any authentication).

The code for this is found inside of RHQ's agent - you can read its javadoc and look through SecureConnector code here.

The core code is found in openSecureConnection and looks like this, I'll break it down:

First, it simply obtains the HTTPS connection object from the URL itself:
HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
Then it prepares a custom SSLContext object using the given secure socket protocol:
TrustManager[] trustManagers;
SSLContext sslContext = SSLContext.getInstance(getSecureSocketProtocol());
If no truststore file was provided, it will build its own "no-op" trust manager and "no-op" hostname verifier. What these "no-op" objects do is accept all certificates and hostnames, so they will always allow the SSL communications to flow. This is how the authentication is bypassed:
if (getTruststoreFile() == null) {
    // configured to not care about authenticating server, encrypt but don't worry about certificates
    trustManagers = new TrustManager[] { NO_OP_TRUST_MANAGER };
If a truststore file was provided, then it will be loaded in memory and stored in a KeyStore instance:
} else {
    // need to configure SSL connection with truststore so we can authenticate the server.
    // First, create a KeyStore, but load it with our truststore entries.
    KeyStore keyStore = KeyStore.getInstance(getTruststoreType());
    keyStore.load(new FileInputStream(getTruststoreFile()), getTruststorePassword().toCharArray());
The truststore file's content (now stored in a KeyStore object) is used to initialize a trust manager. Unlike the "no-op" trust manager that was created above (if a truststore file was not provided), this trust manager really does perform authentication and it uses the provided truststore's certificates to authorize the server being communicated with. This is why we no longer need to worry about the system properties "javax.net.ssl.trustStore" and "javax.net.ssl.trustStorePassword" - this builds its own trust manager using the data provided by the caller:
    // create a trust manager factory and initialize it with the KeyStore we created with all truststore entries
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(getTruststoreAlgorithm());
    tmf.init(keyStore);
    trustManagers = tmf.getTrustManagers();
Finally, the SSL context is initialized with the trust manager that was created earlier (either the "no-op" trust manager, or the trust manager that was initialized with the truststore's certificates). That SSL context is handed off to the SSL connection so the connection can use the context when it needs to perform authentication:
sslContext.init(null, trustManagers, null);
connection.setSSLSocketFactory(sslContext.getSocketFactory());
The connection is finally returned to the caller, fully configured and ready to be used.
return connection;
This is helpful for certain use cases. First, it is helpful when you have multiple truststores that you need to choose from when connecting to different servers as well as being able to switch truststores at runtime (remember, the system property values of javax.net.ssl.trustStore, et. al. are fixed for the lifetime of the JVM - this helps bypass that restriction). This is also helpful in local testing, debugging and demo scenarios when you don't really need or care about setting up truststores and certificates but you do want to connect over https.
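The "no-op" trust manager technique from the walkthrough can be condensed into a self-contained sketch: a per-connection SSLContext that encrypts traffic but skips server authentication, without touching the javax.net.ssl.trustStore system property. The class and constant names here are mine, not the RHQ code's, and as the post notes, this mode is only appropriate for testing and demos, never production.

```java
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class NoAuthSslSketch {
    // Accepts every certificate chain: encryption without authentication.
    static final X509TrustManager NO_OP_TRUST_MANAGER = new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] chain, String authType) { }
        public void checkServerTrusted(X509Certificate[] chain, String authType) { }
        public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    };

    public static SSLSocketFactory noAuthSocketFactory() throws Exception {
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, new TrustManager[] { NO_OP_TRUST_MANAGER }, null);
        return sslContext.getSocketFactory();
    }

    public static void main(String[] args) throws Exception {
        // This factory would be handed to HttpsURLConnection.setSSLSocketFactory()
        // on a per-connection basis, leaving the JVM-wide defaults untouched.
        System.out.println(noAuthSocketFactory() != null); // prints "true"
    }
}
```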