Wednesday, August 2, 2017

Managed KIE Server gets ready for the cloud

As described in this article, KIE Server can run in two modes:

  • managed, with a controller that is responsible for providing KIE containers to be deployed
  • unmanaged, a self-contained server that allows KIE containers to be deployed manually 
In this article, I'd like to focus on managed mode and show some improvements in that area that will make managed KIE Server ready for the cloud.

Background

With the default configuration of managed KIE Server, both the controller and the KIE Server need to know how to communicate with each other. By default the communication is REST based and thus requires credentials to be provided when sending requests:
  • user and password - for BASIC authentication
  • token - for BEARER authentication 
These should be given as system properties on each side:

  • org.kie.server.user and org.kie.server.password are to be set on the controller JVM to instruct it what credentials to use when connecting to KIE Server(s)
  • org.kie.server.controller.user and org.kie.server.controller.password are to be set on the KIE Server JVM to instruct it what credentials to use when connecting to the controller

This configuration fits nicely in a non-restricted environment where both the controller and KIE Server(s) have no limitations on talking to each other. It does, however, require that the user name and password used by the controller to connect to KIE Servers are set globally via system properties, and thus the same credentials will be used whenever talking to any KIE Server instance.

This setup can become problematic if there are any restrictions between the two. In some cases the controller might be hidden behind a firewall, which makes it an issue for it to communicate with KIE Server(s) when needed. Similarly, this becomes an issue in an OpenShift environment where the controller and KIE Server(s) are in different namespaces - they won't see each other's internal IP.

Here we touch upon another aspect of managed KIE Servers - their location. KIE Server, when running in managed mode, requires the following configuration parameters (given as system properties on the JVM that runs the KIE Server):
  • org.kie.server.id - an id that points to the server template id defined in the controller
  • org.kie.server.controller - a URL of the controller to connect to upon start
  • org.kie.server.location - a URL of this instance of the KIE Server where it will be accessible over HTTP/REST
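
Putting these properties together, a managed KIE Server (running on WildFly in this example) might be started like this - host names, port numbers and the server template id are illustrative:

./standalone.sh \
  -Dorg.kie.server.id=my-server-template \
  -Dorg.kie.server.controller=http://controller-host:8080/kie-wb/rest/controller \
  -Dorg.kie.server.location=http://kieserver-host:8080/kie-server/services/rest/server \
  -Dorg.kie.server.controller.user=controllerUser \
  -Dorg.kie.server.controller.password=controllerPassword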
The location of the KIE Server is expected to be unique, since this is the URL where the actual instance is accessible. That becomes an issue when running KIE Servers behind a load balancer or in cloud based environments. 

That puts us in a situation where we either give the load balancer URL and thereby lose the ability to receive updates from the controller (as only one of the servers will get updates, based on load balancer selection), or we bypass the load balancer and then lose its capabilities for runtime operations. Keep in mind that the location a KIE Server provides on connection to the controller is then used by the (so called) runtime views in workbench - process instances, tasks, etc.

In an OpenShift environment it is pretty much the same issue - either the public IP is provided, which completely hides the individual PODs, or the internal IP of the POD. That has the same consequences as the load balancer case, with one addition - the internal IP won't work at all across namespaces.

Websockets to the rescue...

To resolve all the issues mentioned above, an alternative (and soon to be the default) way of communicating between KIE Server and controller was introduced. It is based on WebSockets, which are now available in pretty much any JEE container (including servlet containers), and it solves pretty much all the issues that were identified, both on premise and in the cloud.


As illustrated on the diagram above, KIE Server is the one who initiates the communication and keeps it active as long as it's alive. That in turn removes any need for the controller to know how to communicate with (and by that connect to) KIE Server instances. So there is no more need to configure any user name or password on the controller JVM to talk to KIE Servers - it will simply reuse the open channel to connected KIE Servers.

KIE Server is solely responsible for the connection. That means it needs to know where the controller is, how to authenticate when opening the connection, and how to handle a lost connection (e.g. when the controller goes down).

The first two are exactly the same as before, given as system properties on the JVM that KIE Server runs on:
  • org.kie.server.controller.user and org.kie.server.controller.password (or a token) - using either BASIC or BEARER authentication
  • org.kie.server.controller - a URL of the controller to connect to upon start
Lost connections are handled by a retry mechanism - as soon as KIE Server gets a notification that the connection is closed, it will start a background thread that attempts to connect to the controller every 10 seconds. Once it is reconnected, that thread is terminated. It will reconnect only if the KIE Server itself is not the one who closed the connection.
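
Conceptually, the retry mechanism boils down to something like the following sketch (illustrative only - this is not the actual KIE Server code, and WebsocketConnection is a made-up abstraction over the channel):

// made-up abstraction over the websocket channel to the controller
interface WebsocketConnection {
    boolean isOpen();
    boolean closedOnPurpose();   // true if KIE Server itself closed the channel
    void connect() throws Exception;
}

public class ControllerReconnectTask implements Runnable {

    private final WebsocketConnection connection;

    public ControllerReconnectTask(WebsocketConnection connection) {
        this.connection = connection;
    }

    @Override
    public void run() {
        // retry only if KIE Server itself did not close the connection
        while (!connection.closedOnPurpose() && !connection.isOpen()) {
            try {
                connection.connect();
            } catch (Exception e) {
                // controller still unreachable - wait 10 seconds and try again
                try {
                    Thread.sleep(10_000);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
        // reconnected (or closed on purpose) - background thread terminates
    }
}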

Since we keep the connection open between KIE Servers and the controller, the location given when a KIE Server connects does not have to be unique any more. That solves the issue with running behind a load balancer or in OpenShift with different namespaces. The system property that provides the location (org.kie.server.location) should now be given as the load balancer URL or the public IP in OpenShift. 

NOTE: If you don't run behind a load balancer in an on-premise setup (not OpenShift), then keep the location of the KIE Server unique regardless of WebSockets being used. A similar rule applies - the same public IP/load balancer should be kept for a single server template only.

There is no need for any extra configuration to enable WebSocket based communication; it is based only on the actual URL given as the controller URL - the org.kie.server.controller system property.

-Dorg.kie.server.controller=ws://localhost:8080/kie-wb/websocket/controller

Depending on where your controller is, you might need to change:
  • localhost - to the actual host/IP of the server where the controller is deployed
  • 8080 - to the actual port number of the server where the controller is deployed
  • kie-wb - to the actual context path of the controller web app 

Both protocols - HTTP/REST and WebSocket - are active by default and either of them can be used. Though one rule must be kept: use a single protocol for all KIE Servers of a given server template. It is recommended to keep a single protocol across all KIE Servers connected to a single controller.

Workbench, which provides the UI for process related operations (Process Instances, Process Definitions and Tasks perspectives), will utilise the WebSocket channel only for administration operations, that is:
  • controller based operations to manage kie servers
  • data set queries registration required by runtime views
All other operations, like getting user tasks or getting process definitions or instances, will use regular REST based communication, as the endpoints are called on behalf of the logged in user to enforce security.

With this enhancement, managed KIE Server is a way nicer option to run in the cloud and behind a load balancer than ever before :)

Stay tuned for more to come!


Thursday, July 6, 2017

Make use of rules to drive your cases

In case management, which was recently released with jBPM version 7, there is a change in the way we look at cases - they are more data driven than flow driven. Of course users are free to define parts of the case definition as process fragments (see the attached sample), but what is important is to look at cases as data that is handled.

The steps required to resolve a case are mainly driven by data - that can be people involved in the case (who take certain actions based on the available data), or the system itself can decide, based on that data, to trigger further actions.

This article is about the latter case - the system taking decisions about further actions. And what better way to take them than business rules :)

Let's have a look at a simple scenario, where we have a basic car insurance case definition that looks like this


There are two roles involved:

  • insured 
  • insuranceRepresentative
At any given point in time data can be inserted into the case instance - to be precise, into its case file. Case file data is under constant supervision of the rule engine, and thus we can build rules that react to the data our case instance contains.

To provide a very simple scenario, let's assume that at some point there is a need for more information to be collected from the insured. This could also be handled by a human actor, e.g. the person who takes the role of insurance representative in a particular case instance. Though for the sake of the example we can just make a business rule that reacts immediately once the data of the case instance indicates there is a decision to ask for more details. It will automatically create a human task assigned to the insured.


rule "ask user for details"
when
    $caseData : CaseFileInstance()
    String(this == "AskForDetails") from $caseData.getData("decision")
then
    $caseData.remove("decision");
    CaseService caseService = (CaseService) ServiceRegistry.get().service(ServiceRegistry.CASE_SERVICE);
    Map<String, Object> parameters = new HashMap<>();
    parameters.put("reason", "How did it happen?");
    caseService.addDynamicTask($caseData.getCaseId(), caseService.newHumanTaskSpec("Please provide additional details", "Action", "insured", null, parameters));
end


So that simple rule will do exactly that. If there is a decision in the case file that is set to AskForDetails, the rule will:
  • remove the decision from the case to avoid a rule loop
  • use ServiceRegistry to get hold of the case service instance
  • configure task input parameters
  • finally add a dynamic task to the case instance, assigned to the user who has the role insured in the case instance
That's all, as simple as that :) Obviously this is a simplistic use case, but it opens the door for integration between rules and case management to make it even more powerful for users. Since it is all about dealing with data, the combination of rules and case management is a perfect fit.

Note: ServiceRegistry is part of the jbpm-services-api module, so make sure it is available on the class path. When building a project (kjar) in workbench there is no need to add anything else, but if you would like to build it outside of workbench, make sure you add the following dependencies to your project - both in scope provided (see the sketch after this list):
  • org.jbpm:jbpm-services-api
  • org.jbpm:jbpm-case-mgmt-api
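For a Maven based project that could look like this in the pom.xml (the version is a placeholder for the jBPM version you use):

<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-services-api</artifactId>
  <version>${jbpm.version}</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-case-mgmt-api</artifactId>
  <version>${jbpm.version}</version>
  <scope>provided</scope>
</dependency>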
ServiceRegistry can also be used for regular processes in exactly the same way. Here are the services that are automatically registered in the registry:

org.jbpm.services.api.DefinitionService
org.jbpm.services.api.DeploymentService
org.jbpm.services.api.ProcessService
org.jbpm.services.api.RuntimeDataService
org.jbpm.services.api.UserTaskService
org.jbpm.services.api.admin.ProcessInstanceAdminService
org.jbpm.services.api.admin.ProcessInstanceMigrationService
org.jbpm.services.api.admin.UserTaskAdminService
org.jbpm.services.api.query.QueryService
org.jbpm.casemgmt.api.CaseRuntimeDataService
org.jbpm.casemgmt.api.CaseService


ServiceRegistry has public static members for all out of the box services, so that is the recommended way to look them up in the registry. If for whatever reason you prefer string based lookup, use the simple name of the interface listed above, e.g. DefinitionService or CaseService.
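
For example, to get hold of the process service from application code, following the same pattern as the rule above (assuming the constant naming matches the CASE_SERVICE member shown earlier):

ProcessService processService = (ProcessService) ServiceRegistry.get()
        .service(ServiceRegistry.PROCESS_SERVICE);
// string based alternative - simple name of the interface
QueryService queryService = (QueryService) ServiceRegistry.get().service("QueryService");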

One final note: this comes with jBPM 7.1, which is just around the corner...

Friday, June 30, 2017

Execution error - how to deal with the unexpected in jBPM 7.1

jBPM technical error handling is based on transactionality and going back to the last (stable) state. That means an error (of any kind) that is not handled by the process will result in the entire transaction being rolled back and the process instance being left in its previous wait state. Any trace of this is only visible in the logs and is usually displayed to the caller (who sent the request to the process engine).

That might not be enough in some cases, and thus additional error handling is required to provide:
  • Better traceability
  • Visibility in case of critical processes
  • Reporting and analytics - based on error situations 
  • External system error handling and compensation

Overview

Configurable error handling has been introduced in version 7.1 that is responsible for catching any technical errors thrown throughout process engine execution (including the task service). A technical error is:
  • anything that extends java.lang.Throwable
  • and was not handled before - e.g. by process level error handling
There are several components that make up the error handling mechanism and allow a pluggable approach to extending its capabilities.

The entry point from the process engine's point of view is the ExecutionErrorManager, which is integrated with the RuntimeManager and is then responsible for providing it to the underlying components - KieSession and TaskService. From the API point of view, ExecutionErrorManager gives access to:

  • ExecutionErrorHandler - the heart of the error handling mechanism
  • ExecutionErrorStorage - pluggable storage for execution error information
ExecutionErrorHandler is bound to the life cycle of RuntimeEngine, meaning it is created when a new runtime engine is created and is destroyed when the RuntimeEngine is disposed. A single instance of the ExecutionErrorHandler is used within a given execution context (transaction). Both KieSession and TaskService use that instance to inform the error handling about processed nodes/tasks. ExecutionErrorHandler can be informed about:
  • Starting processing of a given node instance
  • Completion of processing of a given node instance
  • Starting processing of a given task instance
  • Completion of processing of a given task instance

Such information is mainly used for errors of unknown type - in other words, errors that do not provide information about the process context. For example, a data base exception at commit time will not carry any process information, which would make the error information really poor and pretty much useless. 

ExecutionErrorStorage is a pluggable strategy to allow various ways of persisting information about execution errors. The storage is used directly by the handler, which gets an instance of the storage upon creation (at the time the RuntimeEngine is created). The default storage implementation is based on a data base table. Every error will be stored into that table with all information available in it. Not all errors might have all the details - that depends on the type of error and the possibility to extract information from it.


Error types and filters

Since the error handling will attempt to catch and handle any kind of error, it needs a way to categorize errors so it can properly extract information out of them. It is pluggable, as users might use their own special types of error to be thrown and handled in a different way than the ones provided out of the box.
Error categorization and filtering is based on so called ExecutionErrorFilters. This is a simple interface that is solely responsible for building the instance of ExecutionError that is later on stored via the ExecutionErrorStorage. It has the following methods:
  • accept - indicates if the given error can be handled by the filter
  • filter - where the actual filtering/handling etc. happens
  • getPriority - indicates the priority, which is used when calling filters
Filters provide their priority because only one filter can process a given error - this is mainly to avoid having multiple filters returning alternative “views” of the same error. That's why priority was introduced: it allows more specialized filters to see if they can accept the error and, if so, deal with it; otherwise it is left to be handled by another filter.

An ExecutionErrorFilter can be provided using the ServiceLoader mechanism, which is quite easy and proven, so extending the capability of the error handling is very simple.
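
As a sketch, a custom filter could look roughly like this - note that the package of the ExecutionErrorFilter interface, the ExecutionErrorContext parameter type and the way an ExecutionError is built are assumptions based on the description above, so check the actual interface in your jBPM version:

public class MyExecutionErrorFilter implements ExecutionErrorFilter {

    @Override
    public boolean accept(ExecutionErrorContext errorContext) {
        // handle only our own business exception (MyBusinessException is a hypothetical type)
        return errorContext.getCause() instanceof MyBusinessException;
    }

    @Override
    public ExecutionError filter(ExecutionErrorContext errorContext) {
        // build the error representation with whatever details can be extracted
        ExecutionError error = new ExecutionError();
        error.setType("MyBusiness");
        error.setErrorMessage(errorContext.getCause().getMessage());
        return error;
    }

    @Override
    public Integer getPriority() {
        // lower than the out of the box filters (see the table below) so it is asked first
        return 50;
    }
}

To register it via the ServiceLoader mechanism, add the fully qualified class name of your filter to a META-INF/services file named after the ExecutionErrorFilter interface.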

Out of the box ExecutionErrorFilters:

Class name                                                               | Type    | Priority
org.jbpm.runtime.manager.impl.error.filters.ProcessExecutionErrorFilter | Process | 100
org.jbpm.runtime.manager.impl.error.filters.TaskExecutionErrorFilter    | Task    | 80
org.jbpm.runtime.manager.impl.error.filters.DBExecutionErrorFilter      | DB      | 200
org.jbpm.executor.impl.error.JobExecutionErrorFilter                    | Job     | 100

The lower the priority value, the earlier the filter is invoked. Based on the table above, the filters will be invoked in the following order:
  • Task
  • Process
  • Job
  • DB

Error acknowledgment

By definition, every error that is caught and stored is unacknowledged, which means it is still to be handled by someone or something (in case of automatic error recovery). That is the base approach allowing to filter existing errors on whether they have already been taken care of or not. Acknowledgment of an error saves the user who did the acknowledgment and a time stamp, for traceability purposes.

Since the ExecutionErrorFilter is responsible for creating the ExecutionError instance, different implementations might decide to set the acknowledgement to true immediately when the error is handled - maybe because a notification is sent to some issue tracking system, or an email to an administrator. Again, that is up to the concrete implementation of the filters or even the storage.

Auto acknowledgement of execution errors

By default, execution errors are created unacknowledged and thus require a manual action to be performed, otherwise they will always be seen as information that requires attention. With bigger volumes, manual actions can be time consuming and not suitable in some situations. To help with that, auto acknowledgement of errors has been provided. It is based on scheduled jobs (via the jbpm executor) and there are three types of jobs available:
  • org.jbpm.executor.commands.error.JobAutoAckErrorCommand
    • Job responsible for finding jobs that previously failed but are now either cancelled, completed or rescheduled for another execution. This job will only acknowledge execution errors of type “Job”.
  • org.jbpm.executor.commands.error.TaskAutoAckErrorCommand 
    • Job responsible for auto acknowledgment of user task execution errors for tasks that previously failed but are now in one of the exit states (completed, failed, exited, obsolete). This job will only acknowledge execution errors of type “Task”.
  • org.jbpm.executor.commands.error.ProcessAutoAckErrorCommand
    • Job responsible for auto acknowledgment of errors attached to process instances. It will acknowledge errors where the process instance is already finished (completed or aborted), or where the task the error originated from is already finished - based on the init_activity_id value. This job will acknowledge any type of error that matches the above criteria.
All three jobs can be registered on KIE Server to automatically acknowledge errors. They are recurring jobs, meaning that unless explicitly set to SingleRun they will run once a day by default. They can be configured to run at any time interval by providing NextRun as a time expression, e.g. 2h, 5d etc.

The last parameter these jobs support is EmfName, to provide a custom name of the entity manager factory that should be used when searching for errors to acknowledge. All of these parameters are optional.
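
For example, using the jbpm executor API, the task auto acknowledge job could be scheduled like this (how you obtain the ExecutorService instance depends on your environment; the interval and entity manager factory name are illustrative):

// executorService is an org.kie.api.executor.ExecutorService obtained from your environment
CommandContext ctx = new CommandContext();
ctx.setData("SingleRun", "false");
ctx.setData("NextRun", "12h");              // run every 12 hours instead of the daily default
ctx.setData("EmfName", "org.jbpm.domain");  // optional - custom entity manager factory name
executorService.scheduleRequest("org.jbpm.executor.commands.error.TaskAutoAckErrorCommand", ctx);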

There is a base class that is extended by the individual jobs and can be seen as the starting point for additional implementations of auto acknowledge options:
org.jbpm.executor.commands.error.AutoAckErrorCommand

Once extended there are two methods to be implemented:
  • protected abstract List<ExecutionErrorInfo> findErrorsToAck(EntityManager em);
  • protected abstract String getAckRule();
The first is the most important, as it abstracts the way individual jobs find errors to be acknowledged. The second provides the rule based on which the errors were found; it is only for logging purposes, to indicate what led to the auto acknowledgment. A minimal custom implementation could then look like the sketch below.
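
Note that the JPQL query and the entity field names in this sketch are illustrative - align them with the ExecutionErrorInfo mapping in your jBPM version:

import java.util.List;
import javax.persistence.EntityManager;
// import of ExecutionErrorInfo omitted - its package depends on the jBPM version

public class ProcessIdAutoAckErrorCommand extends AutoAckErrorCommand {

    @Override
    protected List<ExecutionErrorInfo> findErrorsToAck(EntityManager em) {
        // illustrative - find unacknowledged errors for one process definition
        return em.createQuery(
                "from ExecutionErrorInfo where acknowledged = false and processId = :processId",
                ExecutionErrorInfo.class)
            .setParameter("processId", "my.sample.process")
            .getResultList();
    }

    @Override
    protected String getAckRule() {
        // used for logging only - describes what led to the auto acknowledgment
        return "unacknowledged errors of process my.sample.process";
    }
}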

Services and access to error information

Access to error information (for the out of the box storage) is through jbpm services. Two admin facing services provide basic access to the error information and the ability to acknowledge the errors:

  • ProcessInstanceAdminService
    • allows to find execution errors of any type, mainly focusing on search capabilities around process instances
  • UserTaskAdminService 
    • allows to find Task type errors and focuses on searches around task details like name or id
Since the ways of looking for errors can be pretty much unlimited, the above services provide only basic access. For more advanced/tailored searches, advanced queries should be used. There is an out of the box query mapper available to directly produce ExecutionError instances out of the data set.

Similar access and capabilities are exposed over the KIE Server remote API and its client library.

Clean up mechanism

To keep the ExecutionErrorInfo table in good health, it needs to be cleaned up from time to time. Since errors can be there for quite some time, depending on the life cycle of the processes, there is no direct API to clean them up. Instead, there is a jBPM executor command that can be scheduled for recurring execution to periodically clean up errors. The clean up command supports several options:
  • DateFormat 
    • date format for further date related params - if not given yyyy-MM-dd is used (pattern of SimpleDateFormat class)
  • EmfName 
    • name of entity manager factory to be used for queries (valid persistence unit name)
  • SingleRun 
    • indicates if execution should be single run only (true|false)
  • NextRun 
    • provides next execution time (valid time expression e.g. 1d, 5h, etc)
  • OlderThan 
    • indicates that only errors older than the given date should be deleted
  • OlderThanPeriod 
    • indicates that only errors older than the given time expression should be deleted (valid time expression e.g. 1d, 5h, etc)
  • ForProcess 
    • indicates errors to be deleted only for given process definition
  • ForProcessInstance 
    • indicates errors to be deleted only for given process instance
  • ForDeployment 
    • indicates errors to be deleted that are from given deployment id
An important note: the command will always (regardless of the parameters given) restrict deletion to already completed/aborted process instances. If there is any other need, it should be extended or provided as a custom command. Scheduling the clean up follows the same executor pattern as the auto acknowledge jobs, as sketched below.
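
A minimal scheduling sketch (the clean up command class name below is an assumption - check the class shipped with your jBPM version):

CommandContext ctx = new CommandContext();
ctx.setData("SingleRun", "false");
ctx.setData("NextRun", "7d");                   // run weekly
ctx.setData("OlderThanPeriod", "30d");          // delete errors older than 30 days
ctx.setData("ForProcess", "my.sample.process"); // optional - limit to one process definition
executorService.scheduleRequest("org.jbpm.executor.commands.error.ExecutionErrorCleanupCommand", ctx);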

Time to see this in action

The screen cast below shows this error handling in action. Moreover, it shows the excellent UI support for it, for which I would like to give credit to the team that worked on it - Cristiano, Neus and Rafael.

In the screen cast you'll see a simple process that, based on a variable, either continues as expected or throws an exception. This exception is then handled as an execution error and is available for users/administrators to deal with. In addition, it illustrates the use of auto acknowledge jobs to acknowledge the errors based on various conditions. Please be patient, as there are some waiting times in the screen cast while waiting for jobs to execute :)

Enjoy and stay tuned for more!!!

Monday, June 26, 2017

KIE Server welcomes Narayana

KIE Server (with BPM capabilities) requires a data base for persistence. That is a well known fact, though to have properly managed persistence there is also a need for a transaction manager that will ensure consistency of the data jBPM persists.

Since version 7, KIE Server is the only execution server provided out of the box (there is no execution server in workbench), so it got some additional attention to make sure it performs in the best possible way.

KIE Server supports following runtime environments:

  • WildFly 10.x
  • EAP 7.x
  • WebSphere 9
  • WebLogic 12.3
  • Tomcat 8.x

Since all of the above are supported for jBPM usage, they must all provide transaction manager capability. For JEE servers (WildFly, EAP, WebSphere, WebLogic) KIE Server relies on what the application server provides. For Tomcat, though, the story is slightly different...

Tomcat does not have transaction manager capabilities, so to make use of jBPM/KIE Server on it, an external transaction manager must be configured. Until now it was recommended to use Bitronix, as the jBPM test suite was running on it and it provides integration with Tomcat (plus it covers db connection pooling and a JNDI provider for data source look ups). But this has now changed...

Starting with jBPM 7.1, KIE Server on Tomcat runs with Narayana, the state of the art transaction manager that integrates nicely with Tomcat and makes the configuration much easier than what was needed with Bitronix - and it is more native to Tomcat users.

Before I jump into the details of how to configure it on Tomcat, I'd like to take the opportunity to give special thanks to:

Tom Jenkinson and Gytis Trikleris

for their tremendous help and excellent support while working on this change.

Installation notes - with BPM capabilities

Let's see what is actually needed to configure KIE Server on Tomcat with Narayana:
  • (1) Copy the following libraries into TOMCAT_HOME/lib
    • javax.security.jacc:javax.security.jacc-api
    • org.kie:kie-tomcat-integration
    • org.slf4j:slf4j-api
    • org.slf4j:slf4j-jdk14
  • (2) Configure users and roles in tomcat-users.xml (or different user repository if applicable)
  • (3) Configure the JACC Valve for security integration: edit TOMCAT_HOME/conf/server.xml and add the following in the Host section, after the last Valve declaration 
         <Valve className="org.kie.integration.tomcat.JACCValve" />
  • (4) Create setenv.sh|bat in TOMCAT_HOME/bin with following content
    CATALINA_OPTS="
    -Djbpm.tsr.jndi.lookup=java:comp/env/TransactionSynchronizationRegistry 
    -Dorg.kie.server.persistence.ds=java:comp/env/jdbc/jbpm 
    -Djbpm.tm.jndi.lookup=java:comp/env/TransactionManager 
    -Dorg.kie.server.persistence.tm=JBossTS 
    -Dhibernate.connection.release_mode=after_transaction 
    -Dorg.kie.server.id=tomcat-kieserver 
    -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server 
    -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller
    "
       The first five entries (jbpm.tsr.jndi.lookup, org.kie.server.persistence.ds, jbpm.tm.jndi.lookup, org.kie.server.persistence.tm and hibernate.connection.release_mode) are related to persistence and transactions.
       The remaining entries (org.kie.server.id, org.kie.server.location and org.kie.server.controller) are general KIE Server parameters needed when running in managed mode.
  • (5) Copy the JDBC driver jar into TOMCAT_HOME/lib, depending on the data base of your choice
  • (6) Configure the data source for the jBPM extension of KIE Server 
           Edit TOMCAT_HOME/conf/context.xml and add the following within the Context tags of the file:
     <Resource 
           name="sharedDataSource" 
           auth="Container" 
           type="org.h2.jdbcx.JdbcDataSource" 
           user="sa" 
           password="sa"
           url="jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;MVCC=TRUE" 
           description="H2 Data Source" 
           loginTimeout="0" 
           testOnBorrow="false"
           factory="org.h2.jdbcx.JdbcDataSourceFactory"/>
           This is only an example of using H2 as the data base; for other data bases look at
           Tomcat's configuration docs.

           One important note: please keep the name of the data source as sharedDataSource.

  • (7) Last but not least is to configure XA recovery 
  • Create an xa-recovery-properties.xml file next to context.xml, with the data base configuration as follows: 
    <?xml version="1.0" encoding="UTF-8"?> 
    <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd"> 
    <properties> 
      <entry key="DB_1_DatabaseUser">sa</entry> 
      <entry key="DB_1_DatabasePassword">sa</entry> 
      <entry key="DB_1_DatabaseDynamicClass"></entry> 
      <entry key="DB_1_DatabaseURL">java:comp/env/h2DataSource</entry> 
    </properties> 

    Append the following to CATALINA_OPTS in the setenv.sh|bat file: 
    -Dcom.arjuna.ats.jta.recovery.XAResourceRecovery1=com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery\;abs://$CATALINA_HOME/conf/xa-recovery-properties.xml\;1
    BasicXARecovery supports the following parameters: 
    • path to the properties file 
    • the number of connections defined in the properties file


Installation notes - without BPM capabilities

In case you want to use KIE Server without BPM capabilities - for instance for Rules or Planning only - you can skip steps 5, 6 and 7 completely (and in step 4 keep only the general KIE Server parameters) and still run KIE Server on Tomcat.

With that, I'd like to say welcome to Narayana in KIE Server - well done!

Friday, April 28, 2017

Control business rules execution from your processes

Business processes benefit a lot from business rules - actually, they are a must have in today's constantly changing world. jBPM comes with excellent integration with Drools and provides multiple levels of business rules integration:

  • BPMN2 Business Rule task
  • Conditional events (start, intermediate and boundary)
  • Sequence flow conditions

The default integration with rules is that the process engine shares the same working memory with the rules engine. That is desired in many cases, but the main limitation is that the life cycles of rules and processes are tightly coupled - they are part of the same project (aka kjar), or in other words the same knowledge base. 

This in turn makes it harder to decouple them and manage the life cycle of business rules independently. There are domains out there that have a much higher frequency of change in the rules than in the processes, so this tight integration makes it hard to upgrade rules without affecting processes - keep in mind that processes are usually long running and can't easily be migrated to the next version just because of rule changes.

To resolve this, an alternative way of dealing with business rules from within a process has been introduced. It supports only business rule tasks, not conditional events or sequence flow conditions. 

What's in the toolbox?

This new feature introduces two new components that serve different use cases:
  • Business Rule Task based on work item handler
  • Remote Business Rule Task based on work item handler and Kie Server Client
Both of them support two types of rule evaluation:
  • DRL
  • DMN 
Let's look at them in detail, starting with the Business Rule Task handler.

Business Rule Task handler

The main use case for this alternative business rule task is to decouple the process knowledge base from the rule knowledge base. That means we will have two projects - kjars - responsible for handling these two business assets. The rule project is solely responsible for defining business rules, which can then be updated at any time without affecting processes. 



Next, the process project (kjar) focuses only on business processes and hands rule evaluation over to the rule project. The only common place is the Business Rule Task handler, which defines what kjar (rule project) should be used within the process project. 

Since the alternative business rule task is based on a work item node, it requires a work item handler to be registered in the process project. As usual, the work item handler is registered in the deployment descriptor (accessible via the project editor); a sketch of such a registration is shown after the argument list below.


Business Rule Task handler is implemented by org.jbpm.process.workitem.bpmn2.BusinessRuleTaskHandler and it expects the following arguments:
  • groupId - mandatory argument referring to the rule project's groupId
  • artifactId - mandatory argument referring to the rule project's artifactId
  • version - mandatory argument referring to the rule project's version 
  • scanner interval - optional, used to schedule periodic scans for rule updates - in milliseconds
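In the deployment descriptor, that registration could look like this (the GAV of the rule project and the scanner interval are illustrative):

<work-item-handlers>
  <work-item-handler>
    <resolver>mvel</resolver>
    <identifier>new org.jbpm.process.workitem.bpmn2.BusinessRuleTaskHandler("org.company.rules", "rules-project", "1.0.0", 10000)</identifier>
    <parameters/>
    <name>BusinessRuleTask</name>
  </work-item-handler>
</work-item-handlers>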
Business Rule Task handler supports two types of rule evaluation:
  • DRL - allows to specify the following data inputs on task level to control its execution
    • Language - set to DRL
    • KieSessionName - name of the kie session as defined in kmodule.xml of the rule project - can be left empty, which means the default kie session is used
    • KieSessionType - stateless or stateful - stateless is the default if not given
    • all other data inputs will be inserted into working memory as facts and thus will be available for rule evaluation. Note that data inputs and outputs should be matched by name to properly retrieve updated facts and put them back into process variables
  • DMN - allows to specify the following data inputs on task level:
    • Language - set to DMN
    • Namespace - DMN namespace to be used
    • Model - model to be used
    • Decision - optional decision name to be used
    • all other data inputs are added to the DMN context for evaluation. All results from DMNResult are available as data outputs referred to by name, as defined in DMN
NOTE: Since the projects are decoupled, they need to have a common data model that both will operate on. Currently that model must be on the parent class loader instead of on the kjar level, due to classes being loaded by different class loaders, which would make rules not match them properly. In case of the execution server (KIE Server), the model jar must be added to WEB-INF/lib of the KIE Server app and added to both projects as a dependency in provided scope.


    Remote Business Rule Task handler

    The remote flavour of the handler comes with the KIE Server Client and aims at providing even more decoupling, as it communicates with an external decision service (KIE Server) to evaluate rules. This provides more flexibility and removes the limitation of the common data model having to be present on the application level. With the remote business rule task, users can simply define the model project as a project dependency (with default scope) for both projects - rules and processes. Since there is marshalling involved, there is no problem with class loaders, and thus it works as expected.

    From the project authoring point of view there is only one difference - the Business Rule Task requires an additional data input, ContainerId, that defines which container on the execution server it should target. Obviously this can be a container alias as well, for more flexibility.



    Similar to the Business Rule Task handler, this handler supports both DRL and DMN. Depending on which type is needed, it supports different data inputs:
    • DRL
      • Language - DRL
      • ContainerId - container id or alias of the container to be used to evaluate rules
      • KieSessionName - name of the kie session on execution server for given container
    • DMN
      • Language - DMN
      • ContainerId - container id or alias of the container to be used to evaluate rules
      • Namespace - DMN namespace to be used
      • Model - model to be used
      • Decision - optional decision name to be used
    Any other data input will be sent as data to the execution server. Results handling is exactly the same as with the Business Rule Task handler.


    The Remote Business Rule Task handler is implemented by org.kie.server.client.integration.RemoteBusinessRuleTaskHandler. The handler expects the following arguments:
    • serverUrl - location of the execution server that this handler should use; it can be a comma separated list of URLs that will be used by a load balancer
    • username - user to be used to authenticate calls to the execution server
    • password - password to be used to authenticate calls to the execution server
    • classLoader - the project's class loader, to gain access to custom types 

    Registration of the handler is done exactly the same way, via the deployment descriptor. 


    That concludes the description of the new capabilities. If you'd like to try it yourself, simply clone this repository to your workbench, build and run it!

    Here you can see this being done...



    That's all for now, though stay tuned as more will come :)




    Thursday, April 13, 2017

    Email configuration for jBPM

    A quite frequent question from users is how to send emails from within a process instance. This is a rather simple activity, but it requires a bit of configuration upfront.

    So let's start with the application server configuration to provide JavaMail Sessions via JNDI directly to jBPM, so the process engine does not need to be concerned too much with infrastructure details.

    In this article, WildFly 10 is used as the application server and the configuration is done via jboss cli. Gmail is used as the mail server, as it is quite a common choice.

    Configure WildFly mail session

    This is quite simple, as it only requires configuring a socket binding and then the actual JNDI resource of mail session type:

    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=jbpm-mail-smtp/:add(host=smtp.gmail.com, port=465)

    /subsystem=mail/mail-session=jbpm/:add(jndi-name=java:/jbpmMailSession, from=username@gmail.com)
    /subsystem=mail/mail-session=jbpm/server=smtp/:add(outbound-socket-binding-ref=jbpm-mail-smtp, ssl=true, username=username@gmail.com, password=password)

    The following values are external references:
    • host - smtp server host name
    • port - smtp server port number
    • jndi-name - the JNDI name the mail session will be bound to
    The following values must be replaced with the actual Gmail account details that shall be used when sending emails:
    • from - address that should be a valid email account
    • username - account to be used when logging in to the SMTP server
    • password - account's password to be used when logging in to the SMTP server

    When starting the WildFly server, you have to provide the JNDI name to be used by jBPM to find the mail session; this is given as a system property:

    ./standalone.sh .... -Dorg.kie.mail.session=java:/jbpmMailSession

    That is all that is needed from the application server point of view.

    Configure jBPM for sending emails

    There are two email components that can be used by jBPM to send emails:
    • Email service task (backed by work item handler)
    • User task notifications
    Both of them can rely on application server mail sessions, so configuration of the application server applies to both email capabilities.

    Work item handler configuration

    The email activity is based on the work item mechanism, so it requires a work item handler to be registered for it. Work item handlers can be registered via the deployment descriptor, which can be done on kjar level or server level. The screen shot below illustrates the configuration of the deployment descriptor on kjar level, done in workbench


    What this does is register org.jbpm.process.workitem.email.EmailWorkItemHandler for the Email work item/service task, and since it should use the mail session from JNDI, we need to reset the handler properties so they won't get in the way (arguments of the EmailWorkItemHandler constructor):
    • host set to null
    • port set to -1
    • username set to null
    • password set to null
    • useTLS set to true
    This is done to make sure all these values are taken from the JNDI mail session of the application server.
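
    Expressed in the deployment descriptor, such a registration could look roughly like this - the five constructor arguments mirror the list above, but the exact constructor signature of EmailWorkItemHandler is an assumption, so verify it against the version you use:

    <work-item-handlers>
      <work-item-handler>
        <resolver>mvel</resolver>
        <!-- host=null, port=-1, username=null, password=null, useTLS=true (signature assumed) -->
        <identifier>new org.jbpm.process.workitem.email.EmailWorkItemHandler(null, "-1", null, null, "true")</identifier>
        <parameters/>
        <name>Email</name>
      </work-item-handler>
    </work-item-handlers>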

    As mentioned, this email registration can also be done on server level, in the global deployment descriptor that all kjars will inherit from. See more about the deployment descriptor in the docs.

    That's all from the jBPM configuration point of view. 

    Process with Email task

    The last thing is to actually take advantage of the configured email infrastructure, so let's model the most basic process, whose only job is to send an email and complete.


    The Email task can be found in the left hand side panel (palette), under the Service Tasks category.

    Once placed on the process flow, you need to provide four data inputs to instruct the email work item handler how to compose the email message:
    • From - valid email address
    • To - valid email address of the recipients (to specify multiple addresses, separate them with a semicolon ';')
    • Subject - email subject
    • Body - email body (can include html)

    That's all, just build and deploy it and enjoy receiving emails from your processes.

    Thursday, April 6, 2017

    Case management application in workbench

    As part of the coming jBPM version 7, I'd like to present a new component that allows an easy and comprehensive look into case management. As described in the article, case management should be business focused and domain specific to bring the most value to end users, who in many cases are not technical. But there is also the other side of the coin - administrators or technical business users. They are as important as end users, because in many cases they are the people that keep the apps running and available for end users.

    With jBPM 7 they are certainly not left behind. A new component is available for that audience to give them a quick and rather complete view of the cases (both definitions and instances). To make it possible to deal with various types of cases, that component was made generic to:

    • bring visibility to the technical users
    • provide insight into where the case instance is
    • allow certain operations to be performed on a case instance which might not be visible to end users through the case app
    The good thing is that the application can be used standalone or can be automatically provisioned by workbench and accessed from within the workbench UI.

    By launching the Case Management Showcase application you will be transferred to a new window that allows you to operate on cases:
    • start new instances of case definitions
    • view and administer already active case instances
    • use workbench runtime views (Process Instances and Tasks) to interact with case instance activities

    As can be seen, the example shows the Order IT hardware case instance, the same one shown in the case app article. I'd like to show that the exact same capability exposed to end users through the case app can be performed by technical/admin users using the Case Management app and workbench.




    That's as simple as that. Those who are familiar with workbench as an environment will find themselves at home and navigate through the screens easily. Again, its target is technical users, as it does not bring the business context and thus might leave end users (non technical ones) confused and a bit lost. 

    Running workbench with case management app

    So that illustrates how it actually works, but how do you set this up?

    First of all, let's configure the runtime environment for workbench; there is not much new compared to a standard installation of the workbench:
    • user and roles setup
      • make sure there is an application user that has at least the following roles:
        • kie-server
        • user
      • make sure there is a management user - this will be used by the provisioning service of the workbench to automatically deploy the case management application
    • security domain setup 
      • make sure that the security domain used by workbench (by default it's other) has the additional login module defined (org.kie.security.jaas.KieLoginModule)
    • system properties set for the JVM running the server
      • org.jbpm.casemgmt.showcase.deploy=true - instructs workbench to deploy the case management showcase app when starting
      • org.kie.server.location=http://localhost:8230/kie-server/services/rest/server - location of the KIE Server that the case management app is going to talk to
      • org.jbpm.casemgmt.showcase.wildfly.username - optional user name (of the management user) in case it's different than admin
      • org.jbpm.casemgmt.showcase.wildfly.password - optional user password (of the management user) in case it's different than admin
      • org.jbpm.casemgmt.showcase.wildfly.port - optional wildfly port that the server is running on - default 8080
      • org.jbpm.casemgmt.showcase.wildfly.management-port - optional wildfly management port - default 9990
      • org.jbpm.casemgmt.showcase.wildfly.management-host - optional wildfly management host name - default localhost
      • org.jbpm.casemgmt.showcase.path - local file path that points to the war file to be deployed; can be used instead of relying on maven to download it
    That should be enough to start the workbench and provision Case Management application to be automatically deployed. 

    Assuming most of the defaults are in use, the following command is enough to start the server where workbench is deployed:
    ./standalone.sh \
      --server-config=standalone-full.xml \
      -Dorg.jbpm.casemgmt.showcase.deploy=true \
      -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server

    As part of the startup you should see the Case Management app being provisioned:
     [org.jbpm.workbench.wi.backend.server.casemgmt.service.CaseProvisioningExecutor] (EJB default - 1) Executing jBPM Case Management Showcase app provisioning...
    ....
    [org.jbpm.workbench.wi.backend.server.casemgmt.service.CaseProvisioningExecutor] (EJB default - 1) jBPM Case Management Showcase app provisioning completed.

    NOTE: the provisioning is based on maven resolution of jbpm-case-mgmt-showcase, so depending on the download time it can time out. To overcome this, you can download that artefact manually via maven to speed it up.

    Next, clone the Order IT hardware case project into workbench, then build and deploy it to KIE Server. Once built and deployed, it's ready for execution. 



    That concludes how to get started with the brand new stuff coming with version 7... which is really around the corner, so start preparing for it!

    Special credits go to Cristiano and Neus for excellent work that resulted in this new Case Management App.