Implementing Jobs

From Obsidian Scheduler

This information covers implementing jobs in Java. This includes how to write your own jobs, use parameterization and job result features, and how to set up your classpath to include your own job implementations. If you want to schedule execution of scripts, please see our Scripting Jobs topic.

We recommend you review this page fully before implementing your own jobs. Obsidian provides features not available in other schedulers that greatly improve re-usability and help ensure reliable execution. Reviewing this page and considering all available features will help you make the best choices for your needs.

You can also look at examples in our Built-in Jobs, which have been open-sourced under the MIT License as of Obsidian 2.7.0. In the root of the installation folder, you can find the source in obsidian-builtin-job-src.jar.

In addition, you can check out our Javadoc, which documents the features you'll need to write your own Obsidian jobs. We recommend you consult the Javadoc in combination with this page to understand the best way to use Obsidian's job functionality.

SchedulableJob Interface

SchedulableJob Javadoc

Note: If you need to set up a development environment to create Obsidian jobs, see the Classpath section.

Implementing jobs in Obsidian is very straightforward for most cases. At its most basic, implementing a job simply requires implementing the SchedulableJob interface which has a single method, as shown below.

public interface SchedulableJob {
	public void execute(Context context) throws Exception;
}

In your implementation, the execute() method performs whatever work the job requires. It can throw any type of Exception, which is handled automatically by Obsidian.

If you aren't using parameterization or saving job results, that's all you need to do. It's likely you'll just be calling some existing code through your job implementation. Here's an example:

import com.carfey.ops.job.Context;
import com.carfey.ops.job.SchedulableJob;
import com.carfey.ops.job.param.Description;

@Description("This helpful description will show in the job configuration screen.")
public class MyScheduledJob implements SchedulableJob {
	public void execute(Context context) throws Exception {
		CatalogExporter exporter = new CatalogExporter();
		exporter.export(); // illustrative call into your existing code
	}
}

All executed jobs are supplied a Context object (see Javadoc), which is used to expose configuration parameters and job results.

You can also access the scheduled runtime of the job via Context.getScheduledTime(), which returns a com.carfey.jdk.lang.DateTime. If you wish to convert this to another date type, such as java.util.Date, you can use its getMillis() method, which provides UTC time in milliseconds from the epoch:

Date runTime = new java.util.Date(context.getScheduledTime().getMillis());

Note: You can annotate your job with the com.carfey.ops.job.param.Description (see Javadoc) annotation to provide a helpful job description which is shown in the job configuration screen. This can be useful for indicating how a job should be configured.

As of Obsidian 4.3.0, descriptions support formatting for rendering in the UI.

Dependency Injection via Spring

Obsidian supports executing jobs wired as components via Spring. See our dedicated page on Spring Integration for full details.


Parameterization

Obsidian offers flexibility and reuse in your jobs by supporting configurable parameters for each job.

If you would like to parameterize jobs, you can define parameters on the job class itself, or use custom parameters which are only set when configuring a job. Defined parameters are automatically displayed in the Jobs screen to help guide configuration, but also to provide defaults and enforce data types and required values. Custom parameters can be set for any job, and lack additional validation.

Defined parameters are specified on the job class using the @Configuration annotation (see Javadoc).

The following example shows a job using various parameters. It includes a required url parameter that has two valid values, an optional set of names for saving the results, and a Boolean value that determines whether compression should be used. It shows a fairly comprehensive usage of various data types and other parameter settings.

import com.carfey.ops.job.param.Configuration;
import com.carfey.ops.job.param.Parameter;
import com.carfey.ops.job.param.Type;

@Configuration(params={
		@Parameter(name="url", required=true, type=Type.STRING, listArgs={"",""}),
		@Parameter(name="saveResultsParam", required=false, allowMultiple=true, type=Type.STRING),
		@Parameter(name="compressResults", required=false, defaultValue="false", type=Type.BOOLEAN)
})
public class MyScheduledJob implements SchedulableJob {

As of Obsidian 4.3.0, Parameter descriptions support formatting for rendering in the UI.

If you are running parameterized jobs, these parameters are very easy to access. Both defined and custom parameters are accessed in the same way. Example:

public void execute(Context context) throws Exception {
	JobConfig config = context.getConfig();

	MyExistingFunction function = new MyExistingFunction();

	String url = config.getString("url");

	boolean compress = config.getBoolean("compressResults"); // defaults to false
	String result = function.go();
	for (String resultsName : config.getStringList("saveResultsParam")) {
		context.saveJobResult(resultsName, result);
	}
}

For all the available methods on JobConfig, see the Javadoc.

The following is the @Parameter source code (see Javadoc), which helps illustrate attributes that can be configured:

public @interface Parameter {
	public String name();
	public boolean required();
	public boolean requiredAtRuntime(); // as of 3.7.0
	public Type type() default Type.STRING;
	public boolean allowMultiple() default false;
	public String defaultValue() default "";
	public Class<? extends ListProvider> listProvider() default StaticListProvider.class; // as of 3.3.0
	public String[] listArgs() default {}; // as of 3.3.0
	public String description() default ""; // as of 4.0.2
}

As of Obsidian 4.0.2, a parameter can be associated with a description that is integrated with help information displayed in the user interface. This description is also returned in the API calls that return job parameter information.

As of Obsidian 3.7.0, a parameter can be defined as requiredAtRuntime. This allows the job to be configured without a parameter, but ensures a parameter value is set with one-time submissions. Of course, if it is configured with a parameter value, one-time submissions will not require a value.
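
As an illustration, a job might combine requiredAtRuntime with a parameter description as sketched below. The class name AdHocReportJob and the parameter targetDate are hypothetical, and the params attribute reflects our reading of the @Configuration Javadoc, so consult it for the exact signature:

```java
import com.carfey.ops.job.Context;
import com.carfey.ops.job.SchedulableJob;
import com.carfey.ops.job.param.Configuration;
import com.carfey.ops.job.param.Parameter;
import com.carfey.ops.job.param.Type;

@Configuration(params={
	// Hypothetical parameter: the job can be saved without a value,
	// but one-time submissions must supply one.
	@Parameter(name="targetDate", required=false, requiredAtRuntime=true,
			type=Type.STRING,
			description="Date to process; must be supplied for ad hoc runs.")
})
public class AdHocReportJob implements SchedulableJob {
	public void execute(Context context) throws Exception {
		String targetDate = context.getConfig().getString("targetDate");
		// ... process targetDate ...
	}
}
```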

List Parameterization

As of Obsidian 3.3.0, you can directly specify a list of valid values within a @Parameter annotation by using the listArgs option. The job screen will then present the values for these parameters as a selection list. Note that if required is set to false, an empty value will automatically be included in the list.

    @Parameter(name="logLevel", required=true, type=Type.STRING, listArgs={"ERROR", "INFO", "DEBUG"})

Custom Dynamic Lists

For more complex scenarios, you may wish to enumerate values dynamically. This can be done by creating your own implementation of the ListProvider interface, including it in the Obsidian classpath, and then referencing it in your @Parameter annotation via listProvider. The listArgs value can be used to provide arguments to your listProvider, since they are passed into it when enumerating valid values.

The example below demonstrates this using the built-in FileListProvider, which provides a listing of full file paths based on a directory configured in a global parameter specified via listArgs.

/** Enumerate all files in the directory specified by the global parameter "rootDirectory". **/
    @Parameter(name="file", required=true, type=Type.STRING, listProvider=com.carfey.ops.job.param.FileListProvider.class, listArgs={"rootDirectory"})

Dynamic File Lists

As of Obsidian 3.3.0, if you wish to define a parameter which enumerates a file listing based on a server-side directory, you can use the built-in FileListProvider.

This allows you to enumerate files in a server-side directory which is configured in a named global parameter. To use this feature, specify the appropriate listProvider class along with at least one value for listArgs to specify the global parameter name which will contain the configured directory. When the job is configured, Obsidian will enumerate valid values from the directory configured in the global parameter. At execution, the configured value will also be checked to ensure it is a valid value based on the current directory listing.

   @Parameter(name="fileToProcess", type=Type.STRING, listArgs={"sourceDirectory"}, listProvider=FileListProvider.class, required = false),
   @Parameter(name="logTarget", type=Type.STRING, listArgs={"logDirectory", "false", ".*log", "true"}, listProvider=FileListProvider.class, required = false)

As shown in the logTarget parameter, FileListProvider supports additional arguments. See the Javadoc for full usage details.


Parameter Inheritance

By default, all @Configuration annotations in the job class hierarchy are inherited by subclasses and their parameters are combined. However, if a subclass defines a parameter with the same name as a parent class, the subclass version overrides the parent version.

As of Obsidian 3.0, @Configuration includes a replaceInherited attribute. If set to true, parent classes' @Configuration annotations are ignored entirely, replacing their parameter definitions.

Global Parameters

Obsidian 2.5 introduced Global Parameters. These let you configure job parameters globally, and then simply import them into jobs as needed. Global parameters help avoid repeating the same configuration steps over and over, and can even be used to hide sensitive values from users, since they have separate access control in the admin web application.

By default, if a job parameter is configured with a value that is surrounded by double curly braces (e.g. {{param}}), then it is treated as a global parameter reference. When Obsidian sees a global parameter reference in this format during job execution, it imports all configured global parameters under the name (e.g. param) in place of the reference. Note that Obsidian does not support global parameter references embedded inside parameter values, since it does not perform text substitution - only parameter values containing only the global parameter reference will be replaced with the global parameter value.
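
The whole-value rule can be illustrated with a short sketch. This is not Obsidian's code, and it assumes the default doubled curly-brace tokens:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GlobalRefCheck {
    // Matches only when the ENTIRE value is a single {{name}} reference;
    // embedded references like "jdbc:{{dbUrl}}/x" deliberately do not match.
    private static final Pattern WHOLE_VALUE_REF = Pattern.compile("^\\{\\{(\\w+)\\}\\}$");

    /** Returns the referenced global parameter name, or null when the value is not a whole-value reference. */
    public static String referencedGlobal(String configuredValue) {
        Matcher m = WHOLE_VALUE_REF.matcher(configuredValue);
        return m.matches() ? m.group(1) : null;
    }
}
```

Under the default behaviour, only a value in the first form would be expanded to the global parameter's configured values; an embedded reference is left untouched.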

Obsidian will perform automatic type conversion for all values - a global parameter's type definition doesn't have to match the type of the defined parameter that references it. Once Obsidian has resolved all global parameter values, it will validate them to ensure all defined parameter restrictions are respected. Note that Obsidian strictly enforces that a global parameter must exist when referenced.

Note that you can configure a job parameter with multiple global parameter references along with normal values, and Obsidian will combine them all into the configuration passed into your job.

The Global Parameters page explains how to configure global parameters.

Note: If you wish to change the tokens used to surround global parameters, you may override them using properties outlined in Advanced Configuration.

Global Substitution Mode

Available as of Obsidian 3.4.0

In some cases, you may wish to embed global parameters inside other parameters, rather than substitute them entirely. For example, when using a ScriptFileJob, you may wish to inject a global parameter value into an argument passed into a script.


To enable this, update the scheduler setting useGlobalSubstitutions to true. Note that this changes the behaviour of all global parameter references to use plain text substitution.

After enabling this setting, you may reference any number of global parameters inside job parameters using the normal curly brace syntax (e.g. {{globalParamName}}), and they may occur anywhere in the parameter value.

Important Note: Changing this setting may impact existing jobs since global substitutions use the first configured global parameter value to perform text substitution, while the normal behaviour expands global parameter references to use all configured values. In addition, if any job parameters contain text within doubled-up curly braces, Obsidian will interpret these as global parameter references and will fail job validation if they do not exist.
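
The substitution-mode behaviour described above can be sketched as plain text substitution using the first configured value for each referenced global parameter. Again, this is illustrative code, not Obsidian's implementation:

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GlobalSubstitution {
    private static final Pattern REF = Pattern.compile("\\{\\{(\\w+)\\}\\}");

    /**
     * Replaces each embedded {{name}} reference with the FIRST configured
     * value for that global parameter, failing if the global does not exist.
     */
    public static String substitute(String value, Map<String, List<String>> globals) {
        Matcher m = REF.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            List<String> vals = globals.get(m.group(1));
            if (vals == null || vals.isEmpty()) {
                throw new IllegalArgumentException("Unknown global parameter: " + m.group(1));
            }
            m.appendReplacement(sb, Matcher.quoteReplacement(vals.get(0)));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```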

Ad Hoc & One-Time Run Parameters

In addition to defining parameters at the job level, Obsidian supports accepting parameters for a specific run time (i.e. job history) through the Jobs screen, or via the REST or Embedded APIs. If a run parameter has the same name as a configured job parameter, the job parameter values are dropped and the run parameter values are used instead.

These parameters are treated the same as those at the job level, and are exposed to the job in the same manner as parameters at the job level. Note that parameters must have the same data type as any already configured for the job, and must conform to restrictions defined by the @Configuration annotation if applicable.

Config Validating Job

ConfigValidatingJob Javadoc

In addition to providing simple validation mechanisms through the @Parameter annotation, Obsidian gives you a way to add custom parameter validation to a job.

The interface com.carfey.ops.job.ConfigValidatingJob extends SchedulableJob and allows you to provide additional parameter validation that goes beyond type validity and mandatory values. Below is its definition:

public interface ConfigValidatingJob extends SchedulableJob {

	public void validateConfig(JobConfig config) throws ValidationException, ParameterException;
}

When a job implementing this interface is configured or executed, the validateConfig() method is called. All configured parameters are available in the same JobConfig object that is provided to the execute() method. You can perform any validation you require within this method. If validation fails, the job will not be created, modified or executed (depending on when validation fails), and the messages you added to the ValidationException are displayed to the user. Consider this example:

public void validateConfig(JobConfig config) throws ValidationException, ParameterException {
	List<String> hosts = config.getStringList("hosts");
	ValidationException ve = new ValidationException();
	if (hosts.size() < 2) {
		ve.add("Host synchronization job requires at least two hosts to synchronize.");
	}
	int timeout = config.getInt("timeout");
	if (timeout < 0) {
		ve.add(String.format("Timeout must be 0 indicating no timeout or greater than 0 to indicate timeout duration. Timeout provided was %s.", timeout));
	}
	if (!ve.getMessages().isEmpty()) {
		throw ve;
	}
}

Validation on Non-Scheduler Instances

If you configure a ConfigValidatingJob on a non-scheduler web application which does not have the job classpath available, Obsidian is forced to skip calling the corresponding validation method when the job is saved, but it will still do so during execution.

Job Results

Obsidian also allows for storing information about your job execution. This information is then available in chained and resubmitted jobs. In addition, as of release 1.4, jobs can be conditionally chained based on the saved results of a completed trigger job.

Job Results can be viewed after a job completes in the Job Activity screen. They are also exposed in the Obsidian REST API.

Note that this example both evaluates source job information (i.e. job results saved by the job that chained to this one) and saves state from its own execution, which could be used by a subsequently chained job:

public void execute(Context context) throws Exception {
	// Grab results from the source job that was chained to this one
	Map<String, List<Object>> sourceJobResults = context.getSourceJobResults();
	List<Object> oldResultsList = sourceJobResults.get("inputFile");
	String oldResults = (String) oldResultsList.get(0);

	// ... job execution ...

	// This saved value is then available to chained jobs and can be viewed in the UI
	context.saveJobResult(resultsParamName, oldResults + " Updated");

	// As of 2.2, you can save multiple results at a time as a convenience.
	context.saveMultipleJobResults("file", Arrays.asList("first", "second"));

	// As of 3.6, you can replace job results.
	context.replaceJobResult(resultsParamName, "replace old value");
	context.replaceMultipleJobResults("file", Arrays.asList("third", "fourth"));
}


The Context object (see Javadoc) methods used for retrieving and storing results are:

  • java.util.Map<java.lang.String,java.util.List<java.lang.Object>> getSourceJobResults()
  • void saveJobResult(java.lang.String name, java.lang.Object value)
  • void saveMultipleJobResults(java.lang.String name, Collection<?> values) (from 2.2 onward)
  • void replaceJobResult(java.lang.String name, java.lang.Object value) (from 3.6 onward)
  • void replaceMultipleJobResults(java.lang.String name, Collection<?> values) (from 3.6 onward)

Note that getSourceJobResults() will return job results saved by the job that was chained directly to the currently executing job. If multiple jobs are chained in sequence, this method will not return results from every job in the chain. If you wish to pass all results down the chain, you can invoke saveMultipleJobResults() within each job, using the values from getSourceJobResults().

Supported Job Result Types

Though the job result methods accept java.lang.Object, there are limitations to what types Obsidian can store as a job result:

  • Basic java.lang types such as Boolean, String and subclasses of Number are supported automatically.
  • For other types, job result values are stored using the object's toString() representation, and re-constructed by invoking a constructor that takes a single String argument with the stored text value. This constructor must exist, must be public, and must be able to rebuild the instance from the stored String representation.
  • Possible approaches to storing complex objects include storing them as a String and rebuilding them manually, returning JSON from toString() and re-constituting the object from JSON in the single-String constructor, or using Base64-encoded Java serialization in toString() and the constructor.
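
For example, a custom result type satisfying this contract might look like the following sketch. FileBatch is a hypothetical class invented for this illustration:

```java
public class FileBatch {
    private final String directory;
    private final int count;

    public FileBatch(String directory, int count) {
        this.directory = directory;
        this.count = count;
    }

    // Public single-String constructor used when the stored result is re-constructed.
    public FileBatch(String stored) {
        int sep = stored.lastIndexOf('|');
        this.directory = stored.substring(0, sep);
        this.count = Integer.parseInt(stored.substring(sep + 1));
    }

    @Override
    public String toString() {
        return directory + "|" + count; // the representation that would be stored
    }

    public String getDirectory() { return directory; }
    public int getCount() { return count; }
}
```

The round trip new FileBatch(original.toString()) must yield an equivalent instance; that is the whole contract.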

Annotation-Based Jobs

Schedulable Javadoc

While Obsidian offers a simple Java interface for creating new jobs, Obsidian also provides a way to use annotations to make an arbitrary Java class executable.

com.carfey.ops.job.SchedulableJob.Schedulable is a class-level marker annotation indicating that methods are annotated for scheduled execution. Adding this annotation allows you to configure a job in the Obsidian web app or REST API despite the class not implementing SchedulableJob.

com.carfey.ops.job.SchedulableJob.ScheduledRun is a method-level annotation to indicate one or more methods to execute at runtime. It has an int executionOrder() method that defaults to 0. This value indicates the order in which to execute methods. Duplication of execution order is not permitted. Annotated methods must have no arguments and must be public.

Note: Using these annotations precludes you from storing job results or parameterizing your job.
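
To illustrate the contract described above (this is not Obsidian's implementation), the following sketch discovers annotated no-argument methods via reflection, orders them by an executionOrder value modeled on ScheduledRun's, and rejects duplicate orders. The Run annotation and SampleJob class are invented for this example:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class AnnotationRunner {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Run {
        int executionOrder() default 0;
    }

    /** Returns method names in ascending executionOrder, failing on duplicate orders. */
    public static List<String> executionPlan(Class<?> jobClass) {
        SortedMap<Integer, String> byOrder = new TreeMap<>();
        for (Method m : jobClass.getMethods()) {
            Run run = m.getAnnotation(Run.class);
            if (run == null) continue; // only annotated public no-arg methods qualify
            if (byOrder.put(run.executionOrder(), m.getName()) != null) {
                throw new IllegalStateException("Duplicate executionOrder: " + run.executionOrder());
            }
        }
        return new ArrayList<>(byOrder.values());
    }

    public static class SampleJob {
        @Run(executionOrder = 1) public void extract() { }
        @Run(executionOrder = 2) public void load() { }
    }
}
```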

Interruptable Jobs

InterruptableJob Javadoc

InterruptableContextJob Javadoc (since 3.2)

As of Obsidian 1.5.1, it is possible to terminate a running job on a best effort basis. As of Obsidian 3.6.0, Forked Jobs can also be interrupted.

In some exceptional cases, it may be necessary or desirable to force termination of a job. Since exposing this functionality for all jobs could result in unexpected and even dangerous results, Obsidian provides two Java interfaces that are used specifically for this function.

The interfaces InterruptableJob and InterruptableContextJob extend SchedulableJob and flag a job as interruptable. Technically speaking, this means that the main job thread will be interrupted by Thread.interrupt(), when an interrupt request is received via the UI or REST API.

Both interfaces mandate implementation of a void beforeInterrupt() method; the InterruptableContextJob version also supplies the job's Context object, which exposes the interrupting user through getInterruptUser(). This method allows you to perform house-cleaning before Obsidian interrupts the job thread. For example, you may have additional threads to shut down, or other resources to release. You may also want to set a flag on the job instance to indicate to the executing thread that it should shut down, rather than rely on checking Thread.isInterrupted(). Your beforeInterrupt() should return in a timely fashion, though a slow implementation will not block other job scheduling or execution functionality.
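
The shutdown-flag approach can be sketched as follows. A real job would implement InterruptableJob; here only the flag mechanics are shown, and PollingWorker is a hypothetical class:

```java
public class PollingWorker {
    // volatile so the write in beforeInterrupt() is visible to the worker thread
    private volatile boolean shutdownRequested = false;

    /** House-cleaning hook; a real job would also release threads/resources here. */
    public void beforeInterrupt() {
        shutdownRequested = true;
    }

    /** Processes items until done or until shutdown is requested; returns the count processed. */
    public int run(int totalItems) {
        int processed = 0;
        while (processed < totalItems
                && !shutdownRequested
                && !Thread.currentThread().isInterrupted()) {
            processed++; // stand-in for one unit of real work
        }
        return processed;
    }
}
```

Checking both the flag and the interrupt status lets the job abort promptly whether beforeInterrupt() or Thread.interrupt() arrives first.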

It is possible that the job completes either successfully or with failure before the interrupt can proceed. If the interrupt proceeds, the job will be marked as Error and the interruption details will be made available for review in both the Job Activity and Log views.

Note: After invoking beforeInterrupt(), Obsidian will invoke Thread.interrupt() to try to get the job to abort. Thread.interrupt() does not forcibly terminate a thread in most cases; it is up to the job itself to support aborting at an appropriate time when an interrupt is received.

Classpath for Building and Deploying

To implement jobs in Java, you will need to reference Obsidian base classes in your Java project.

To deploy and run jobs in Obsidian, your built code and any third-party libraries it requires must be included in the Obsidian classpath. If you're running the scheduler within a servlet container (e.g. Tomcat), JARs should be placed under /WEB-INF/lib. This could be a plain installation of Obsidian, or a web app that contains both your application and Obsidian. If you are running the standalone scheduler, place your JARs under the /standalone directory. Otherwise, if you are running an embedded Obsidian instance, ensure the jobs are available on the classpath in one form or another.

If you are running a standalone web application that does not have a scheduler running, you do not have to update its classpath with your compiled jobs, unless you are running a version older than 2.6. To be able to configure jobs in your standalone web application, your Obsidian instances which run jobs will need to have been started at least once after your latest classpath changes, and after configuring classpath scanning. This is because scheduler instances store job metadata in the Obsidian database so the admin web application can properly validate jobs.

The base libraries to build Java jobs are found in the zip file you downloaded under the /standalone directory:

  • obsidian.jar

Prior to Obsidian 2.1.1, the following libraries were also included.

  • jdk.jar
  • jdk-gen.jar
  • suite.jar
  • suite-gen.jar
  • obsidian-gen.jar
  • carfey-date-1.2.jar or carfey-date-1.1.jar

These libraries should not conflict with your existing build classpath since they are internal Obsidian libraries.

To build a custom WAR, you can use the provided WAR artifacts in the Obsidian zip package you downloaded, and customize it in your desired build technology (e.g. Ant, Maven, Gradle, etc.).

Maven users: Note that we do not publish Maven artifacts for Obsidian, so you will not be able to include them by referencing a public repository.

JVM Forking

Obsidian 3.0 introduced Job Forking which runs each job in its own JVM instance which is started for each execution. This enables hot-swapping of JARs so that jobs can be updated without restarts. By default, this feature works on standalone instances, but other modes can be supported with minor customization.

Classpath Scanning

Obsidian supports classpath scanning to find your jobs for display in the job edit screen.

All classes that implement com.carfey.ops.job.SchedulableJob or use the com.carfey.ops.job.SchedulableJob.Schedulable and com.carfey.ops.job.SchedulableJob.ScheduledRun annotations will be included, provided they are on the classpath.

To configure classpath scanning, you must specify one or more package prefixes via scheduler settings. Select the "Job" category, and locate the "packageScannerPrefix" parameter. Specify your comma delimited list of package prefixes and save your changes.

Note: The prefixes should be as specific as possible to reduce memory overhead. For example, if all your jobs are under a package such as com.example.jobs, use the prefix "com.example.jobs" rather than the broader "com.example".

[Screenshot: the packageScannerPrefix parameter in the scheduler settings screen]

As of 3.0.1, you can also configure how often Obsidian will check "packageScannerPrefix" for changes, which results in a re-scan of available jobs. This is done via the "classpathScanFrequency" parameter in the "Job" category in scheduler settings.

If you are using Spring and wish to integrate Obsidian and Spring, you will likely not need to use this distinct classpath scanning functionality, since jobs found in the Spring context will be available in the job edit screen automatically.

Initializing Jobs on Startup

If you're interested in initializing your jobs into Obsidian on startup without having to write and execute code or manually configure them using the UI, you can use the Initializing and Restoring functionality available as of Obsidian 3.0.0.

Best Practices

Obsidian's many features give you multiple ways to solve the same problem, but here are some tips to guide your implementation:

  • Use parameters to promote reuse in your jobs by making them more generic - this helps avoid builds just for configuration changes. Defined parameters are especially useful to enforce constraints on configuration.
  • Use class inheritance when writing your SchedulableJob classes to share common functionality between different jobs.
  • Save job results for use in chained jobs. For example, you can chain to a generic archive or FTP transfer job that uses source job results to know what to send.
  • Use global parameters when you are referencing configuration that many jobs require (e.g. database connection info or shared file paths).
  • Avoid catching exceptions without rethrowing them when you want Obsidian to recognize a job failure. Obsidian relies on seeing a thrown exception to record job failures.
  • Use script jobs to write simple jobs that are used for maintenance or simple tasks, but stick to compiled SchedulableJob classes for critical jobs or performance-sensitive production code.
  • Contact us if you want suggestions on how to implement your jobs. Our team is happy to help guide you on the right path.