RHQ provides administration, monitoring, alerting, operational control and configuration in an enterprise setting with fine-grained security and an advanced extension model.
About the Project
|Getting Involved||If you wish to get involved as a contributor to RHQ, please visit the #rhq channel on Freenode IRC and get to know people.|
|Developers||Our developers are always looking for the community to get involved, whether it is ideas for improvement, documentation, contributed plugins, or core development. Check the Contributions page on the RHQ wiki.|
|Community||Our user mailing list and our developer mailing list are the main channels of communication between all community members. You can also join the team on IRC (#rhq on irc.freenode.net).|
|Knowledge||User docs and developer resources can be found on the RHQ wiki.|
|Project Status||RHQ uses the Red Hat Bugzilla issue tracker to organize and prioritize tasks. Development effort is done in the RHQ Project, which includes the Jopr Project specific to JBoss technology management.|
RHQ Project | Open Issues | Source code Git repository
|Professional Support||Red Hat delivers the enterprise Support, Consulting, and Training that you need, whether you are testing a proof of concept, deploying a mission-critical application, or rolling out JBoss Middleware products across your enterprise. The JBoss Operations Network is a fully supported enterprise product for monitoring and managing JBoss middleware products that is based on RHQ.|
RHQ blogs about the RHQ project
- Alert definition templates in plugin (descriptor)s?
- Aug 24, 2014 6:23 PM by Heiko Rupp
This is a more general question and not tied to a specific version of RHQ (but may become part of a future version of RHQ and/or RHQ-alerts).
Do you feel it would make sense to provide alert templates inside the plugin (descriptor) to allow the plugin writer to pre-define alert definitions for certain resource types / metrics? Plugin-writers know best what alert definitions and conditions make sense and should get the power to pre-define them.
This idea would probably work best with relative metrics like "disk is 90% full", as opposed to absolute values that probably depend a lot more on concrete customer scenarios (e.g. heap usage over 64MB may be fine for small installations, but not for large ones).
In the future with RHQ-alerts, it should also be possible to compare two metrics with each other, which will allow one to say "if metric(disk usage) > 90% of metric(disk capacity) then ...".
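A condition of that shape can be sketched in a few lines of plain Java. All names here are hypothetical illustrations, not RHQ API:

```java
// Sketch of a relative-metric alert condition: fire when one metric
// exceeds a percentage of another. Class and method names are invented
// for illustration; this is not RHQ alerting code.
public class RelativeMetricCondition {
    private final double thresholdPercent; // e.g. 90.0 means "90% of the reference metric"

    public RelativeMetricCondition(double thresholdPercent) {
        this.thresholdPercent = thresholdPercent;
    }

    /** Returns true when 'value' exceeds thresholdPercent of 'reference'. */
    public boolean shouldFire(double value, double reference) {
        return value > reference * (thresholdPercent / 100.0);
    }

    public static void main(String[] args) {
        RelativeMetricCondition diskAlmostFull = new RelativeMetricCondition(90.0);
        // disk usage 95 GB of 100 GB capacity -> fires
        System.out.println(diskAlmostFull.shouldFire(95.0, 100.0)); // true
        // disk usage 50 GB of 100 GB capacity -> does not fire
        System.out.println(diskAlmostFull.shouldFire(50.0, 100.0)); // false
    }
}
```

The appeal of the relative form is visible here: the same 90% template works for a 100 GB disk and a 10 TB disk without the plugin writer knowing either number in advance.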
I've scribbled the idea down in the Wintermute page on the RHQ wiki.
If you think this is useful, please respond here or on the wiki page. Best is if you could add a specific example.
- RHQ-Alerts aka Alerts 2.0 aka Wintermute
- Aug 14, 2014 5:58 AM by Heiko Rupp
The RHQ-team has been thinking about next generation Alerting for RHQ for a while.
I have written my thoughts down in a blog post, "Thoughts on RHQ-Alerts aka Alerts 2.0 aka Wintermute", in the RHQ space of the JBoss community site in order to start the (design) process.
Please feel free to comment here or there :)
- Starting an ActiveMQ Project with Maven and Eclipse
- Jul 29, 2014 4:35 AM by John Mazz
- I'm currently researching and prototyping a new subproject under the RHQ umbrella: a subsystem that can perform the emission and storage of audit messages (we're tentatively calling it "rhq-audit").
I decided to start the prototype with ActiveMQ. But one problem I had was I could not find a "starter" project that used ActiveMQ. I was looking for something with basic, skeleton Maven poms and Eclipse project files and some stub code that I could take and begin fleshing out to build a prototype. So I decided to publish my basic prototype to fill that void. If you are looking to start an ActiveMQ project, or just want to play with ActiveMQ and want a simple project to experiment with, then this might be a good starting point for you. This is specifically using ActiveMQ 5.10.
The code is on GitHub at https://github.com/jmazzitelli/activemq-start
Once you clone it, you can run "mvn install" to compile everything and run the unit tests. Each maven module has an associated Eclipse project and can be directly imported into Eclipse as-is. If you have the Eclipse M2E plugin, these can be imported using that Eclipse Maven integration.
Here's a quick overview of the Maven modules and a quick description of some of the major parts of the code:
- This is the root Maven module's pom. The name of this parent module is rhq-audit-parent, and it is the container for all child modules. This root pom.xml file contains the dependency information for the project (e.g. dependency versions and the repositories where they can be found) and identifies the child modules that are built for the entire project.
- This Maven module contains core code that is shared across all other modules in the project. Its main purpose is to provide code shared between consumer and producer (specifically, the message types that flow from sender to receiver).
- AuditRecord.java is the main message type the prototype project plans to have its producers emit and its consumers listen for. It provides JSON encoding and decoding so it can be sent and received as JSON strings.
- AuditRecordProcessor.java is an abstract superclass that wraps producers and consumers. It provides basic functionality such as connecting to an ActiveMQ broker and creating JMS sessions and destinations.
- This module provides the ability to start an ActiveMQ broker. It has a main() method so you can run it on the command line, and it can also be instantiated in your own Java code or unit tests.
- The thinking with this module is that there will probably be common test code needed by both producer and consumer; this module supports that. The intent is for other Maven modules in this project to list it as a dependency with a scope of "test". For example, some common code will be needed to start a broker in unit tests; including this module as a test dependency gives unit tests that common code.
- TCPEmbeddedBrokerWrapper.java provides a minimally configured ActiveMQ broker that listens on a free TCP port. Tests can have their consumers and producers communicate with this broker.
- VMEmbeddedBrokerWrapper.java provides a simple ActiveMQ broker intended only for intra-VM messaging. It too is minimally configured and is useful, again, for unit or integration tests that need a broker for message processing.
- EmbeddedBrokerTest.java exercises the above test brokers using a test consumer and producer (which are also available for use by other Maven modules, since they are part of the common code provided by this rhq-audit-test-common module).
- This module provides the producer-side functionality of the project. The intent is to flesh out this API further; it will become rhq-audit's producer API.
- AuditRecordProducer.java provides a simple API that allows a caller to connect the producer to the broker and send messages. The caller need not worry about working with the JMS API, as that is taken care of under the covers.
- This module provides the consumer-side functionality of the project. The intent is to flesh out this client-side API further; it will become rhq-audit's consumer API.
- AuditRecordConsumer.java provides a simple API that allows a caller to connect the consumer to the broker and attach listeners so they can process incoming messages.
- AuditRecordListener.java provides the abstract listener class that is to be extended in order to process received audit records. The idea is that subclasses can process audit records in different ways: perhaps one stores the audit records in a backend data store, while another logs the audit messages in rsyslog.
- AuditRecordConsumerTest.java provides a simple end-to-end unit test that uses the embedded broker to pass messages between a producer and consumer.
Taking a look at AuditRecordConsumerTest shows how this initial prototype can be tested and how audit records can be sent and received through an ActiveMQ broker:
1. Create and start the embedded broker:
VMEmbeddedBrokerWrapper broker = new VMEmbeddedBrokerWrapper();
2. Connect the producer and consumer to the test broker:
String brokerURL = broker.getBrokerURL();
producer = new AuditRecordProducer(brokerURL);
consumer = new AuditRecordConsumer(brokerURL);
3. Prepare to listen for audit record messages:
consumer.listen(Subsystem.MISCELLANEOUS, listener);
4. Produce audit record messages:
producer.sendAuditRecord(auditRecord);
At this point, the messages are flowing, and the test code ensures that all the messages were received successfully with the data expected.
A lot of the code in this prototype is generic enough to provide functionality for most messaging projects, though of course there are rhq-audit specific types such as AuditRecord involved. The idea now is to flesh out this generic prototype to meet the requirements of the rhq-audit project. More on that will be discussed in the future. But for now, perhaps this could help others come up to speed quickly with an ActiveMQ project without having to start from scratch.
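The post does not include the AuditRecord class itself. As a rough, dependency-free illustration of the JSON round-trip it describes (field names and structure here are invented, not the actual rhq-audit code, which would more likely use a real JSON library), a minimal version might look like:

```java
// Hypothetical, stripped-down AuditRecord illustrating the JSON
// encode/decode round-trip described in the post. The real rhq-audit
// class surely differs; JSON is hand-rolled here to stay stdlib-only.
public class AuditRecord {
    private final String subsystem; // e.g. "MISCELLANEOUS"
    private final String message;   // the audit text itself

    public AuditRecord(String subsystem, String message) {
        this.subsystem = subsystem;
        this.message = message;
    }

    public String getSubsystem() { return subsystem; }
    public String getMessage() { return message; }

    /** Encode as a JSON string so it can travel as a JMS text message. */
    public String toJson() {
        return String.format("{\"subsystem\":\"%s\",\"message\":\"%s\"}",
                subsystem, message);
    }

    /** Decode a record previously produced by toJson(). */
    public static AuditRecord fromJson(String json) {
        java.util.regex.Matcher m = java.util.regex.Pattern
                .compile("\\{\"subsystem\":\"(.*?)\",\"message\":\"(.*?)\\\"\\}")
                .matcher(json);
        if (!m.matches()) {
            throw new IllegalArgumentException("not an AuditRecord: " + json);
        }
        return new AuditRecord(m.group(1), m.group(2));
    }

    public static void main(String[] args) {
        AuditRecord rec = new AuditRecord("MISCELLANEOUS", "user logged in");
        String json = rec.toJson();
        AuditRecord back = AuditRecord.fromJson(json);
        System.out.println(json);
        System.out.println(back.getMessage());
    }
}
```

Encoding messages as JSON strings keeps the wire format inspectable in the broker's admin tools and avoids tying consumers to Java serialization.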
- Changing the Endpoint Address of an RHQ Storage Node
- Jul 20, 2014 11:18 AM by John Sanda
- There is very limited support for changing the endpoint address of a storage node. In fact, the only way to do so is by undeploying and redeploying the node with the new address. And in some cases, such as when there is only a single storage node, this is not even an option. BZ 1103841 was opened to address this, and the changes will go into RHQ 4.13.
Changing the endpoint address of a Cassandra node is a routine maintenance operation. I am referring specifically to the address Cassandra uses for gossip. This address is specified by the listen_address property in cassandra.yaml. The key thing when changing the address is to ensure that the node's token assignments do not change. Rob Coli's post on changing a node's address provides a nice summary of the configuration changes involved.
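For reference, the setting in question is a single line in cassandra.yaml (the address shown here is illustrative):

```yaml
# cassandra.yaml -- the interface Cassandra binds for gossip with other nodes.
# Changing this is what "changing the endpoint address" means in this post.
listen_address: 192.168.1.100
```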
With CASSANDRA-7356, however, things are even easier. Change the value of listen_address and restart Cassandra with the following system properties defined in cassandra-env.sh:
The seeds property in cassandra.yaml might need to be updated as well. Note that there is no need to worry about the auto_bootstrap, initial_token, or num_tokens properties.
For the RHQ Storage Node, these system properties will be set in cassandra-jvm.properties. Users will be able to update a node's address either through the storage node admin UI or through the RHQ CLI.
One interesting thing to note is that the RHQ Storage Node resource type uses the node's endpoint address as its resource key. This is not good. When the address changes, the agent will think it has discovered a new Storage Node resource. To prevent this, we can add resource upgrade support in the rhq-storage plugin and change the resource key to use the node's host ID, which is a UUID that does not change. The host ID is exposed through the StorageServiceMBean.getLocalHostId JMX attribute.
If you are interested in learning more about the work involved with adding support for changing a storage node's endpoint address, check out the wiki design doc that I will be updating over the next several days.
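For the curious, reading that attribute over JMX from plain Java looks roughly like this. The host, port, and MBean name are assumptions based on Cassandra's standard JMX setup (default port 7199, unauthenticated), not RHQ plugin code:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: fetch a Cassandra node's host ID (a stable UUID) over JMX.
// Assumes Cassandra's default JMX port and its StorageService MBean name.
public class HostIdFetcher {

    /** Builds the standard JMX RMI service URL for a host/port pair. */
    static String serviceUrl(String host, int port) {
        return "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi";
    }

    /** Connects to the node's JMX endpoint and reads the LocalHostId attribute. */
    static String fetchLocalHostId(String host, int port) throws Exception {
        JMXServiceURL url = new JMXServiceURL(serviceUrl(host, port));
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");
            return (String) mbs.getAttribute(storageService, "LocalHostId");
        }
    }

    public static void main(String[] args) throws Exception {
        // Requires a running Cassandra node with JMX enabled on 7199.
        System.out.println(fetchLocalHostId("localhost", 7199));
    }
}
```

Because the host ID survives an address change, a resource key built from it stays stable exactly where the endpoint-address key breaks.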
- RHQ-Metrics and Grafana (updated)
- Jun 15, 2014 2:55 PM by Heiko Rupp
As you may know, I am currently working on RHQ-Metrics, a time series database with some charting extensions that will be embeddable into Angular apps.
One of the goals of the project is to make the database available to other consumers as well, which may have their own input sources and their own graphing.
Over the weekend I looked a bit at Grafana which looks quite nice, so I started to combine RHQ-metrics with Grafana.
I did not (immediately) find a description of the data format between Grafana and Graphite, but saw that InfluxDB is also supported, so I fired up Wireshark and was able to write an endpoint for RHQ-Metrics that supports listing metrics and retrieving data.
The overall setup looks like this:
gmond emits data over IP multicast, which is received by the protocol translator client, ptrans. This translates the data into requests that are pushed into the RHQ-Metrics system and stored there in the Cassandra backend.
Grafana then talks to the RHQ-Metrics server and fetches the data to be displayed.
For this to work, I had to comment out the default datasource in conf.js and use the influx backend like this:
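The actual snippet is not preserved here; in Grafana of that era (1.x), a datasource entry of the kind described would look roughly like the following. The URL, port, and credentials are illustrative assumptions, not the post's values:

```javascript
// conf.js (Grafana 1.x) -- datasource entry pointing at the RHQ-Metrics
// server's Influx-compatible endpoint instead of a real InfluxDB.
datasources: {
  'rhq-metrics': {
    type: 'influxdb',                         // speak the InfluxDB wire protocol
    url: 'http://localhost:8080/rhq-metrics', // the RHQ-Metrics server (assumed path)
    username: 'rhq',
    password: 'rhq'
  }
},
```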
As you can see, the URL points to the RHQ-Metrics server.
The code for the new handler is still "very rough" and thus not yet in the normal source repository. To use it, you can clone my version of the RHQ-Metrics repository and then run start.sh in the root directory to get the RHQ-Metrics server (with the built-in memory backend) running.
Update: I have now added code (still in my repository) to support some of the aggregating functions of Influx, like min, max, mean, count and sum. The following shows a chart that uses those functions: