Exposing JMX attributes and operations in Java

Something I’m working on in my spare time requires me to expose attributes and operations over JMX programmatically, without the use of Spring, so I jumped at the opportunity to do a quick post on how to do so.

The general steps are as follows.

  • Write an interface.
  • Write an implementation.
  • Expose over JMX using JMX API.
  • View using JMX Client!

Below I provide a simple but complete example.

Write an interface.

Note that the interface name ends in ‘MBean’. This isn’t essential, but it is a way of telling the JMX API that you are coding by convention. You can call the interface whatever you like; you’ll just have to use the JMX API in a slightly different way. Personally I prefer arbitrary naming.

[java]
package test;

public interface UserMBean {

    public enum Mood {
        HAPPY, SAD, INDIFFERENT
    }

    int getAge();

    void setAge(int age);

    String getName();

    void setName(String name);

    String getMood();

    void makeSad();

}
[/java]

Write an implementation.

Note that the class name here is the same as the interface name minus the ‘MBean’ suffix. Again, this follows the JMX coding convention but isn’t essential.

[java]
package test;

public class User implements UserMBean {

    private int age;
    private String name;
    private Mood mood;

    public User(String name, int age, Mood mood) {
        this.name = name;
        this.age = age;
        this.mood = mood;
    }

    @Override
    public String getName() {
        return name;
    }

    @Override
    public int getAge() {
        return age;
    }

    @Override
    public void setName(String name) {
        this.name = name;
    }

    @Override
    public void setAge(int age) {
        this.age = age;
    }

    @Override
    public String getMood() {
        return mood.toString();
    }

    @Override
    public void makeSad() {
        mood = Mood.SAD;
    }

}
[/java]

Expose over JMX by convention

Here we simply pass the user to the JMX API, which checks that we are either following the coding convention or passing the interface explicitly.

[java]
package test;

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

import test.UserMBean.Mood;

public class JmxExampleByConvention {

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName id = new ObjectName("name.dhruba.test:type=test1");
        User user = new User("dhruba", 32, Mood.HAPPY);
        server.registerMBean(user, id);
        Thread.sleep(Long.MAX_VALUE);
    }

}
[/java]

If you violate the naming convention you will get an exception like the one below.

[text]
Exception in thread "main" javax.management.NotCompliantMBeanException: MBean class test.DefaultUser does not implement DynamicMBean, neither follows the Standard MBean conventions (javax.management.NotCompliantMBeanException: Class test.DefaultUser is not a JMX compliant Standard MBean) nor the MXBean conventions (javax.management.NotCompliantMBeanException: test.DefaultUser: Class test.DefaultUser is not a JMX compliant MXBean)
at com.sun.jmx.mbeanserver.Introspector.checkCompliance(Introspector.java:160)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:305)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
at test.JmxExampleByConvention.main(JmxExampleByConvention.java:16)
[/text]

Expose over JMX by configuration

Here is how to use the JMX API if we do not wish to use conventional naming and want to call our classes whatever we like. In this case we have to pass the interface to JMX explicitly.

[java]
package test;

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

import test.UserMBean.Mood;

public class JmxExampleByConfiguration {

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName id = new ObjectName("name.dhruba.test:type=test1");
        User user = new User("dhruba", 32, Mood.HAPPY);
        StandardMBean mbean = new StandardMBean(user, UserMBean.class);
        server.registerMBean(mbean, id);
        Thread.sleep(Long.MAX_VALUE);
    }

}
[/java]

View using JMX Client

When you start a JMX client like JVisualVM or JConsole you should initially see some attributes.

JMX attributes

You can then double click the value cells in JVisualVM to change them or invoke an operation in the Operations tab.

JMX operations

Having done so you’ll end up with new values in the Attributes tab.

JMX attributes
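If you want to verify what a client will see without leaving code, you can also drive the registered MBean programmatically through the same MBeanServer; this is essentially what JConsole and JVisualVM do under the covers. Below is a self-contained sketch using a trimmed-down stand-in for the User/UserMBean pair from above (the class and attribute names are illustrative):

```java
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class JmxInProcessClient {

    // Trimmed-down stand-ins for the UserMBean/User pair shown earlier.
    public interface UserMBean {
        String getName();
        String getMood();
        void makeSad();
    }

    public static class User implements UserMBean {
        private final String name;
        private String mood = "HAPPY";

        public User(String name) { this.name = name; }

        @Override public String getName() { return name; }
        @Override public String getMood() { return mood; }
        @Override public void makeSad() { mood = "SAD"; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName id = new ObjectName("name.dhruba.test:type=test1");

        // Passing the interface explicitly via StandardMBean sidesteps
        // any naming-convention concerns for the nested classes here.
        server.registerMBean(new StandardMBean(new User("dhruba"), UserMBean.class), id);

        // Read attributes and invoke an operation through the server,
        // just as JConsole or JVisualVM would.
        System.out.println(server.getAttribute(id, "Name")); // dhruba
        server.invoke(id, "makeSad", new Object[0], new String[0]);
        System.out.println(server.getAttribute(id, "Mood")); // SAD
    }
}
```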

And that’s it. This technique is useful for exposing attributes and operations over JMX without the use of Spring and without needing to make your classes Spring beans. Thanks for reading.


LMAX disruptor framework and whitepaper

This is really old news now, as I’m very late in posting it, but since I’m still coming across people who have remained blissfully unaware I thought it was worth reiterating. If you haven’t come across this yet, drop everything else and read about the LMAX Disruptor framework and the associated whitepaper, titled Disruptor: High performance alternative to bounded queues for exchanging data between concurrent threads. There is also an associated (and rather dated now) InfoQ presentation titled How to Do 100K TPS at Less than 1ms Latency.

In the beginning there was a main thread of execution, then came two, and then thousands. Once we had scaled to starvation with threads came SEDA and the concept of queues: hierarchical topologies of queues, with many writers and readers operating on them and threads relegated to second-class-citizen status. For a while the industry rested in the assurance that it had reached equilibrium in latency innovation. Then, out of the blue, LMAX happened. LMAX (London Multi Asset eXchange) is one of the highest performance financial exchanges in the world.

Read the whitepaper to find out just how outdated conventional wisdom on concurrent queuing in Java actually is, and how a lack of awareness of how your code performs end-to-end, from hardware to VM, could be creating bottlenecks for your platform. The essence of the Disruptor framework is a strikingly simple concept, but at the same time profound, not only in its effectiveness in attaining its goal of reducing latency but also in the extent to which it leverages knowledge of the hardware and the Java virtual machine it runs on.

It proves wrong beyond doubt the rather outdated mindset that questions employing Java for low latency financial use cases. Ever since Java 5, and particularly Java 6, the JVM has dwarfed the Java language in importance, capabilities and scope; as a result, utilising Java is now fundamentally synonymous with utilising the JVM, which is what makes the language so compelling.

It isn’t about the code that you write; it’s about the code that is ultimately compiled and run natively. It is naive to consider only the language, as many seem to be doing in the light of the imminent release of Java 7. It’s important to bear in mind that whilst language sugar is important, if runtime performance matters to you then you’ll want to focus on: (1) the VM, (2) writing wholly non-idiomatic Java, and (3) opposing conventional wisdom at every level of abstraction, every step of the way.

Performance pattern: Modulo and powers of two

The modulo operator is rare but does occur in certain vital use-cases in Java programming. I’ve been seeing it a lot in striping, segmenting, pipelining and circularity use cases lately. The normal and naive implementation is as below.

[java]
public static int modulo(int x, int y) {
    return x % y;
}
[/java]

Recently I saw the following pipelining logic in quite a few places in the codebase on a fast runtime path. This essentially takes an incoming message and uses the following to resolve a queue to enqueue the message onto for later dequeuing by the next stage of the workflow.

[text]
int chosenPipeline = input.hashCode() % numberOfPipelines
[/text]

It is little known, however, that on most hardware the division operation, and as a result the modulo operation, can actually be quite expensive. You’ll notice, for example, that modulo is rarely used in hash functions. Have you ever asked why? The reason is that there is a far quicker alternative: bitwise AND. This does involve a small compromise in the inputs to the original problem, however. If we are willing to always supply y in the above method as a power of two, we can do the following instead.

[java]
public static int moduloPowerOfTwo(int x, int powerOfTwoY) {
    return x & (powerOfTwoY - 1);
}
[/java]

This is dramatically quicker. For some stats, which you should be asking for at this point, see the table below.

Iterations    Modulo (ms)    PowerOfTwoModulo (ms)
10^4 * 32     5              1
10^5 * 32     24             4
10^6 * 32     237            54
10^7 * 32     2348           549
10^8 * 32     30829          5504
10^9 * 32     320755         54947

You might be thinking at this point that if you’re expecting a power of two you should validate the input. Well, that’s one viewpoint. The other is that if you’re supplying y yourself, or if it is statically configured at startup, you can ensure it is a power of two without taking the performance hit of a runtime check. But if you really want to check, here’s how to do so.

[java]
public static boolean isPowerOfTwo(int i) {
    return (i & (i - 1)) == 0;
}
[/java]
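To convince yourself the substitution is safe, it’s easy to assert that the two forms agree whenever the divisor is a power of two. Here is a standalone sketch (the pipeline count of 16 is just an illustrative value):

```java
public class ModuloCheck {

    static int modulo(int x, int y) {
        return x % y;
    }

    // Valid only when powerOfTwoY is a power of two.
    static int moduloPowerOfTwo(int x, int powerOfTwoY) {
        return x & (powerOfTwoY - 1);
    }

    static boolean isPowerOfTwo(int i) {
        return i > 0 && (i & (i - 1)) == 0;
    }

    public static void main(String[] args) {
        int pipelines = 16; // illustrative: must be a power of two
        if (!isPowerOfTwo(pipelines)) {
            throw new IllegalArgumentException("pipelines must be a power of two");
        }
        // The two forms agree for non-negative x. Note that for negative x
        // they differ: % can return a negative value while & never does,
        // which matters if you feed in raw hashCode() values.
        for (int x = 0; x < 1000; x++) {
            if (modulo(x, pipelines) != moduloPowerOfTwo(x, pipelines)) {
                throw new AssertionError("mismatch at " + x);
            }
        }
        System.out.println("all match");
    }
}
```

The guard against negative inputs is worth remembering: the blog’s `input.hashCode() % numberOfPipelines` example can yield a negative index, whereas the bitwise form always yields a valid non-negative one.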

So the next time you’re writing something that falls into one of the above use cases, or any other use of the modulo operator, and your method needs to be fast at runtime, consider the faster alternative. Certainly for price streaming (which is what I’m doing) latency matters! It would be interesting to check whether the JIT compiler actually makes this optimisation for you automatically when the divisor is a constant power of two; if so, one can stick with the slower form for better readability.

The intelligent reader might say that in any such typical modulo use case the use of a bounded data structure, and the resulting contention, will far outweigh the cost of the modulo operation, and the reader would be right in saying so. But that’s another problem space entirely which I intend to explore separately. In short: there’s no need to be limited by a bounded data structure 🙂

Oracle celebrates upcoming Java 7 release on video

Oracle recently celebrated the upcoming release of Java 7 with great pomp and show, and subsequently made recordings of the event available as a series of videos. If you haven’t already done so, watch the videos in order below and read the blog post. There are also some thoughts on what’s coming in Java 8 in the final Q&A video.

It’s great to see Oracle engaging with the community to this extent and so publicly. This could have been just another release, but I’m glad it received more publicity and visibility in this way, particularly in giving sub-project leads within Java 7 the recognition they deserve and, I hope, the inspiration to carry on with their great work. I’ve also subscribed to the Oracle Java Magazine to see what it offers in due time.

Introducing Java 7: Moving Java Forward

http://c.brightcove.com/services/viewer/federated_f9?isVid=1

Technical breakout sessions

In addition to the main presentation there were also smaller and more specialised technical breakout sessions as below.

Making Heads and Tails of Project Coin, Small Language Changes in JDK 7 (slides)


Divide and Conquer Parallelism with the Fork/Join Framework (slides)


The New File System API in JDK 7 (slides)


A Renaissance VM: One Platform, Many Languages (slides)


Meet the Experts: Q&A and Panel Discussion


Thoughts

A few thoughts that occurred to me having watched the above presentations follow below.

  • In Joe’s presentation I realised just how important good editor support is in prompting developers to adopt the Project Coin proposals over older ways of achieving the same ends. I was very impressed watching NetBeans detect older syntax, prompt the developer with helpful warnings, and change old to new syntax instantaneously. I really hope Eclipse does the same. Eclipse has asked for quick fix, refactoring and template suggestions, and in response I would say that the most important additions, beyond supporting the language itself, would be those supporting idiomatic transitions from Java 6 to Java 7.
  • Watching Joe Darcy go through how they implemented switch on strings and the associated performance considerations was fascinating. They actually use the hashcode values of strings to generate offsets and then use the offsets to execute the logic in the original case statements.
  • I found it very cool that Stuart Marks actually retrofitted the existing JDK code to utilise some of the Project Coin features, not by hand but in an automated fashion. Apparently the JDK team also used annotation-based processing and NetBeans-based tooling to help them upgrade the JDK codebase to use the new features.

Java 7 release candidate 1 released

Java 7 release candidate 1 has been released. Those who thought Java 7 final would be released today (the 7th): that is not the case, as I mentioned in my previous post. Today is simply the marketing launch. It will in fact be released, as originally announced, on 28 July. The real question now is how long before it gets to production for all of us. Come on, banks. Show a little courage and adopt early.

Depth and breadth first tree traversal

A friend of mine mentioned depth and breadth first tree traversal today, and since I didn’t already have a post on this I thought it would be a good opportunity to write one and give my take on it. This post focuses on depth and breadth first tree traversal rather than graph search, and although the algorithm is identical to the iterative dfs/bfs traversal you’d expect, there are a couple of small differences in the way I’ve done it here which may serve as useful tips.

Node

The first thing we need is the classic Node class which has an identity and children.

[java]
package name.dhruba.kb.algos.dfsbfs;

class Node {

    final String name;
    final Node[] children;

    Node(String name) {
        this.name = name;
        this.children = new Node[0];
    }

    Node(String name, Node[] children) {
        this.name = name;
        this.children = children;
    }

    boolean hasChildren() {
        return children != null && children.length > 0;
    }

    @Override
    public String toString() {
        return name;
    }

}
[/java]

Depth and breadth first traversal

Now here is the depth and breadth first traversal algorithm. Observations on the code follow underneath.

[java]
package name.dhruba.kb.algos.dfsbfs;

import java.util.ArrayDeque;
import java.util.Deque;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class DfsBfsTraverser {

    static final Logger logger = LoggerFactory.getLogger(DfsBfsTraverser.class);

    enum TraversalType {
        DEPTH_FIRST, BREADTH_FIRST;
    }

    interface NodeProcessor {
        void process(Node node);
    }

    static void traverse(Node root, TraversalType traversalType,
            NodeProcessor processor) {

        if (root == null) {
            return;
        }

        if (!root.hasChildren()) {
            processor.process(root);
            return;
        }

        Deque<Node> deck = new ArrayDeque<Node>();

        addToDeck(deck, traversalType, root);

        while (!deck.isEmpty()) {

            Node current = deck.removeFirst();

            if (current.hasChildren()) {
                for (Node child : current.children) {
                    addToDeck(deck, traversalType, child);
                }
            }

            try {
                processor.process(current);
            } catch (Exception e) {
                logger.error("error processing node", e);
            }

        }

    }

    static void addToDeck(Deque<Node> deck, TraversalType traversalType,
            Node node) {
        if (traversalType == TraversalType.DEPTH_FIRST) {
            deck.addFirst(node);
        } else {
            deck.addLast(node);
        }
    }

}
[/java]

Observations

  • Note that the iterative algorithm implementations for depth and breadth first traversal are so similar that there’s no need for a separate one for each. This class has been modified very subtly to accommodate both.
  • Note the use of ArrayDeque as a highly efficient, stack-confined deque. This allows us to add to both the front and the back depending on the traversal type. It is also worth noting that, as its javadocs mention, this class is likely to be faster than Stack when used as a stack and faster than LinkedList when used as a queue.
  • Note the initial elimination cases where we can get away with doing the bare minimum.
  • And finally, note the use of a callback which allows the caller to do as they wish with each node. This may seem like a small measure now, but in the next post on this subject I’ll go into how to use the callback’s return value to let the caller dictate how far, and in which direction, to traverse, as well as how to let the caller terminate the traversal once they’ve found what they’re looking for. Such functionality can be critical to good performance.

Testing

Take the following tree as an example.

        a
      /   \
     b     c
    / \   / \
   e   f g   h

DFS returns: [a, c, h, g, b, f, e]
BFS returns: [a, b, c, e, f, g, h]

Note that DFS progresses from right to left due to the order in which children are added and removed from the deque.
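To reproduce the results above, here is a condensed, self-contained variant of the traverser that collects node names into a list instead of passing them to a NodeProcessor (the varargs Node constructor is a convenience added for this sketch):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class DfsBfsExample {

    static class Node {
        final String name;
        final Node[] children;

        Node(String name, Node... children) {
            this.name = name;
            this.children = children;
        }
    }

    // Same deque-based traversal as above: addFirst for DFS, addLast for BFS.
    static List<String> traverse(Node root, boolean depthFirst) {
        List<String> order = new ArrayList<>();
        Deque<Node> deck = new ArrayDeque<>();
        deck.addFirst(root);
        while (!deck.isEmpty()) {
            Node current = deck.removeFirst();
            for (Node child : current.children) {
                if (depthFirst) {
                    deck.addFirst(child);
                } else {
                    deck.addLast(child);
                }
            }
            order.add(current.name);
        }
        return order;
    }

    public static void main(String[] args) {
        // The example tree: a -> (b, c); b -> (e, f); c -> (g, h).
        Node tree = new Node("a",
                new Node("b", new Node("e"), new Node("f")),
                new Node("c", new Node("g"), new Node("h")));
        System.out.println(traverse(tree, true));  // [a, c, h, g, b, f, e]
        System.out.println(traverse(tree, false)); // [a, b, c, e, f, g, h]
    }
}
```

The DFS order comes out right to left because children are pushed onto the front of the deque left to right, so the rightmost child is removed first.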

Thanks for reading.

Java SE 7 API Javadocs receive new colour scheme

The Java SE 7 API specification (colloquially known as the javadocs) has received a stylistic facelift. Compare v7 with v6. What do you think? Is it an improvement? Was it even necessary? My opinion is twofold.

Firstly, although overall the javadocs look nicer and more professional and corporate (as opposed to academic), the method specifications aren’t as visually prominent as they were before, because the styles and colours overwhelm the text. Both the styles and the colour of the text are now subtle, making methods harder to tell apart; it’s not the clear bold blue on white it was before. This means the reader will probably have to start reading the text to tell methods apart, instead of just glancing at a visual image that was previously quite striking. A friend of mine also mentioned that there was just too much going on in terms of boxes, and I can see what he means.

Secondly, and this is the more important point: if they were going to spend time enhancing the javadocs, what required the most attention was in fact navigability and searchability. The activity that takes up most of my time when using the javadocs is finding the pieces of information I’m interested in. Better indexing and type-ahead retrieval for classes, packages, properties and methods would be immensely useful, rather than relying on the browser’s multi-frame search, which can fail at times by searching in the wrong frame. And before anyone mentions it, I’m aware there are third party sites which do such indexing and retrieval, but I want this officially. So, that’s my 2p, Oracle. I appreciate the time you’ve put into the javadocs, but there’s much more room for improvement. Being a purist, I really feel that this is more about content and usability than appearance.

P.S. I think the new Google dark colour scheme and navigation bar is absolutely horrid. I want the old google back! 😦