Abstract:
An extensible process design provides an ability to dynamically inject changes into a running process instance, such as a BPEL instance. Using a combination of BPEL, rules and events, processes can be designed to allow flexibility in terms of adding new activities, removing or skipping activities and adding dependent activities. These changes do not require redeployment of the orchestration process and can affect the behavior of in-flight process instances. The extensible process design includes a main orchestration process, a set of task execution processes and a set of generic trigger processes. The design also includes a set of rules evaluated during execution of the tasks of the orchestration process. The design can further include three types of events: an initiate process event, a pre-task execution event and a post-task execution event. These events and rules can be used to alter the behavior of the main orchestration process at runtime.
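The orchestration loop described above can be illustrated with a minimal sketch: pending tasks sit in a queue, and the pre- and post-task execution events are passed through a rule set that may skip the next task or enqueue a dependent one, all without changing the deployed definition. This is an illustration only, not the patented design; the Orchestrator, Task, and Rule names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of a rule- and event-driven orchestrator.
public class Orchestrator {

    enum EventType { INITIATE_PROCESS, PRE_TASK_EXECUTION, POST_TASK_EXECUTION }

    interface Task { String name(); void execute(); }

    // A rule inspects the event and may mutate the pending task queue,
    // e.g. skip the next task or inject a dependent one at runtime.
    interface Rule { void apply(EventType event, Task current, Deque<Task> pending); }

    private final Deque<Task> pending = new ArrayDeque<>();
    private final List<Rule> rules;

    Orchestrator(List<Task> tasks, List<Rule> rules) {
        this.pending.addAll(tasks);
        this.rules = rules;
    }

    public void run() {
        fire(EventType.INITIATE_PROCESS, null);
        while (!pending.isEmpty()) {
            Task current = pending.poll();
            fire(EventType.PRE_TASK_EXECUTION, current);  // rules may remove or reorder tasks here
            current.execute();
            fire(EventType.POST_TASK_EXECUTION, current); // rules may add dependent tasks here
        }
    }

    private void fire(EventType event, Task current) {
        for (Rule rule : rules) {
            rule.apply(event, current, pending);
        }
    }
}
```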
Abstract:
A graphical user interface (GUI) displays a flow of activities of a business process, including any portion thereof from which capture of data is permitted. The GUI receives, in an operation, at least an indication of a business process portion from which data is to be captured (“sensor”), an identification of an endpoint to which captured data is to be transferred, and a type of the endpoint which identifies (through a mapping) a predetermined software. A sensor may be added any number of times (through a single GUI or through multiple GUIs) by repeatedly performing the operation. Also, a given sensor may be associated with multiple endpoints. On execution of the business process portion, the computer(s) executing the business process check whether a sensor is present and, if so, execute the corresponding predetermined software(s) to transfer data from the sensor directly to the respective endpoint(s).
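A minimal sketch of the runtime side of this idea follows: sensors are recorded per activity, each endpoint's type selects the transfer code to run, and execution of an activity dispatches captured data to every registered endpoint. All class and method names (SensorRegistry, onActivityExecuted, and so on) are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical sketch: sensors keyed by activity name, each with one or more endpoints.
public class SensorRegistry {

    record Endpoint(String address, String type) {}

    // Maps an endpoint type to the "predetermined software" that transfers the data.
    private final Map<String, BiConsumer<Endpoint, Object>> transfersByType = new HashMap<>();

    // Maps an activity (business process portion) to its endpoints; a sensor exists
    // for an activity exactly when this map contains an entry for it.
    private final Map<String, List<Endpoint>> sensors = new HashMap<>();

    public void registerTransfer(String endpointType, BiConsumer<Endpoint, Object> transfer) {
        transfersByType.put(endpointType, transfer);
    }

    public void addSensor(String activityName, Endpoint endpoint) {
        sensors.computeIfAbsent(activityName, k -> new ArrayList<>()).add(endpoint);
    }

    // Called by the process engine when the activity executes.
    public void onActivityExecuted(String activityName, Object capturedData) {
        List<Endpoint> endpoints = sensors.get(activityName);
        if (endpoints == null) {
            return; // no sensor on this portion of the process
        }
        for (Endpoint endpoint : endpoints) {
            // The endpoint type selects which transfer implementation to invoke.
            transfersByType.getOrDefault(endpoint.type(), (e, d) -> {}).accept(endpoint, capturedData);
        }
    }
}
```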
Abstract:
Systems and methods are described for providing task chaining as part of modeling a business process (e.g., a BPEL process). Chained tasks maintain a reference to the previous task, and during retrieval of that task the system can append relevant information from the previous task, including but not limited to its task history, attachments and comments. Task chaining can be enabled by selecting a previously completed task and marking the current task as chaining the selected task. In one embodiment, tasks are chained across multiple instances of a process. Accordingly, tasks in different processes can be chained together to obtain access to the same context information and other data.
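The chaining behavior can be illustrated with a small sketch in which each task optionally references a previous task and retrieval merges the earlier task's history, comments, and attachments into the returned view. The ChainedTask type and its fields are hypothetical, not the actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a task that chains a previously completed task.
public class ChainedTask {

    final String id;
    final ChainedTask previous;                      // null if this task starts a new chain
    final List<String> history = new ArrayList<>();
    final List<String> comments = new ArrayList<>();
    final List<String> attachments = new ArrayList<>();

    ChainedTask(String id, ChainedTask previous) {
        this.id = id;
        this.previous = previous;
    }

    // On retrieval, the chained task's view includes the previous task's history.
    public List<String> fullHistory() {
        List<String> merged = new ArrayList<>();
        if (previous != null) {
            merged.addAll(previous.fullHistory());   // recurse down the chain
        }
        merged.addAll(history);
        return merged;
    }

    // Comments and attachments are merged the same way.
    public List<String> fullComments() {
        List<String> merged = new ArrayList<>();
        if (previous != null) {
            merged.addAll(previous.fullComments());
        }
        merged.addAll(comments);
        return merged;
    }
}
```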
Abstract:
One embodiment of the present invention provides a system that facilitates accessing communication queues using a public network. The system operates by first generating a message or messages at a client. The system then formats these messages in a publicly available format. Next, the system communicates the messages across the public network to a web server. The web server receives the messages and transforms the messages to a database specific format. The web server then passes the messages to a queue within a database server across a proprietary network. In one embodiment of the present invention, the system includes queue-to-queue propagation with exactly once guarantees and recovery from failures. In one embodiment of the present invention, the system includes transactional guarantees when a client accesses a queue.
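A client-side sketch, assuming the publicly available format is XML over HTTP, might look like the following; the web server behind the URI is assumed to transform the message to the database-specific format and enqueue it across the proprietary network. The QueueClient class and its enqueue method are illustrative names only.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client sketch: serialize the message to a public format (XML)
// and post it over the public network to the web server front end.
public class QueueClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final URI webServer;

    QueueClient(URI webServer) {
        this.webServer = webServer;
    }

    public int enqueue(String queueName, String payload) throws Exception {
        String xml = "<message queue=\"" + queueName + "\">" + payload + "</message>";
        HttpRequest request = HttpRequest.newBuilder(webServer)
                .header("Content-Type", "text/xml")
                .POST(HttpRequest.BodyPublishers.ofString(xml))
                .build();
        // The web server is assumed to transform and forward the message to the
        // database queue; the status code reports whether the enqueue succeeded.
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode();
    }
}
```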
Abstract:
A buffered message queue architecture for managing messages in a database management system is disclosed. A “buffered message queue” refers to a message queue implemented in a volatile memory, such as a RAM. The volatile memory may be a shared volatile memory that is accessible by a plurality of processes. The buffered message queue architecture supports a publish and subscribe communication mechanism, where the message producers and message consumers may be decoupled from and independent of each other. The buffered message queue architecture provides all the functionality of a persistent publish-subscriber messaging system, without ever having to store the messages in persistent storage. The buffered message queue architecture provides better performance and scalability since no persistent operations are needed and no UNDO/REDO logs need to be maintained. Messages published to the buffered message queue are delivered to all eligible subscribers at least once, even in the event of failures, as long as the application is “repeatable.” The buffered message queue architecture also includes management mechanisms for performing buffered message queue cleanup and also for providing unlimited size buffered message queues when limited amounts of shared memory are available. The architecture also includes “zero copy” buffered message queues and provides for transaction-based enqueue of messages.
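The core idea (messages kept only in memory, per-subscriber read positions, and cleanup once every subscriber has consumed a message) can be sketched as below. This is a simplified illustration; the BufferedMessageQueue class, its synchronization, and its cleanup policy are assumptions, not the described architecture.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory (non-persistent) publish/subscribe queue sketch.
// Each subscriber keeps its own read position; messages live only in RAM.
public class BufferedMessageQueue<M> {

    private final List<M> buffer = new ArrayList<>();
    private final Map<String, Integer> positions = new HashMap<>();

    public synchronized void subscribe(String subscriber) {
        positions.putIfAbsent(subscriber, buffer.size()); // new subscribers see only new messages
    }

    public synchronized void publish(M message) {
        buffer.add(message);                              // no persistent write, no UNDO/REDO log
    }

    // Returns all messages the subscriber has not yet consumed.
    public synchronized List<M> consume(String subscriber) {
        int from = positions.getOrDefault(subscriber, 0);
        List<M> unread = new ArrayList<>(buffer.subList(from, buffer.size()));
        positions.put(subscriber, buffer.size());
        return unread;
    }

    // Drops messages already consumed by every subscriber (queue cleanup),
    // keeping the in-memory footprint bounded.
    public synchronized void cleanup() {
        int minPosition = positions.values().stream().mapToInt(Integer::intValue).min().orElse(0);
        buffer.subList(0, minPosition).clear();
        positions.replaceAll((subscriber, position) -> position - minPosition);
    }
}
```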
Abstract:
A repository contains multiple versions of an object but only a single version of the object is supplied when a query is made. The single version is automatically selected from among a number of versions that are otherwise returned in response to the query, based on a configuration associated with a workspace in which the query originates. The selected version of the object is then presented in a version resolved view, without exposing any information related to versioning of the object. Specifically, a number of configurations are established, each configuration containing no more than one version of each object in the repository. However, only one configuration is associated with each workspace from which a query can originate. The configuration that is associated with the workspace depends on whether the workspace is to be used for design of the repository or for use of the repository during live operation. Specifically, a single configuration (hereinafter “design time” configuration) is commonly associated with the workspaces of all developers. When the developers decide that a set of objects in the repository is ready for use in live operation, the set of objects is “deployed” by copying the design time configuration to generate a new configuration (hereinafter “run time” configuration) that contains the most current versions of all objects (as present in the design time configuration). Any number of run time configurations can co-exist with each other and with the design time configuration.
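A small sketch of version resolution through configurations follows: each configuration pins at most one version per object, a query resolves through the workspace's configuration without exposing version identifiers, and deployment copies the design time configuration into a new run time configuration. The Repository class and its method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a configuration maps each object to at most one version,
// so a query from a workspace resolves to exactly one version per object.
public class Repository {

    // objectId -> (versionId -> object contents)
    private final Map<String, Map<Integer, String>> versions = new HashMap<>();
    // configuration name -> (objectId -> selected versionId)
    private final Map<String, Map<String, Integer>> configurations = new HashMap<>();

    public void saveVersion(String objectId, int versionId, String contents) {
        versions.computeIfAbsent(objectId, k -> new HashMap<>()).put(versionId, contents);
    }

    public void pin(String configuration, String objectId, int versionId) {
        configurations.computeIfAbsent(configuration, k -> new HashMap<>()).put(objectId, versionId);
    }

    // Version-resolved lookup: the caller never sees version identifiers.
    public String query(String configuration, String objectId) {
        Integer versionId = configurations.getOrDefault(configuration, Map.of()).get(objectId);
        return versionId == null ? null : versions.get(objectId).get(versionId);
    }

    // "Deploy": copy the design time configuration into a new run time configuration.
    public void deploy(String designTimeConfig, String runTimeConfig) {
        configurations.put(runTimeConfig,
                new HashMap<>(configurations.getOrDefault(designTimeConfig, Map.of())));
    }
}
```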
Abstract:
A method and apparatus for incremental undo is provided. A process, executing in a database system, establishes a rollback entry in an undo log file as a current rollback entry. The rollback entry, which was selected from a set of rollback entries contained in an undo record, contains data that indicates a change made by a transaction to a data block in the database system. The process first determines whether the rollback entry has been applied by testing a status flag. In one embodiment, the status flag is a bit in a bit vector in the undo block. If the rollback entry has been applied to the database, then the rollback entry is not re-applied; rather, a next rollback entry is established from the set of rollback entries and the process repeats. If the rollback entry has not been applied, then undo information from the rollback entry is retrieved from the undo block and a change is generated. The status flag is set to indicate that the rollback entry has been applied, and a next rollback entry corresponding to the data block is retrieved. The process repeats until there are no more rollback entries to be performed; the multiple changes are then applied to disk in a single atomic operation.
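The applied-flag check and the single atomic write can be sketched as follows, using a BitSet to stand in for the bit vector in the undo block. The IncrementalUndo class, its RollbackEntry record, and writeAtomically are illustrative placeholders, not the patented implementation.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Hypothetical sketch of incremental undo: each rollback entry's "applied"
// status is a bit in a bit vector, and generated changes are flushed once.
public class IncrementalUndo {

    record RollbackEntry(int index, String change) {}

    private final BitSet applied = new BitSet();   // stands in for the undo block's bit vector

    public void rollback(List<RollbackEntry> entries) {
        List<String> changes = new ArrayList<>();
        for (RollbackEntry entry : entries) {
            if (applied.get(entry.index())) {
                continue;                          // already applied: do not re-apply
            }
            changes.add(entry.change());           // generate the change from the undo info
            applied.set(entry.index());            // mark the rollback entry as applied
        }
        writeAtomically(changes);                  // apply accumulated changes in one operation
    }

    private void writeAtomically(List<String> changes) {
        // Placeholder for a single atomic write of all generated changes to disk.
        System.out.println("Applying " + changes.size() + " changes atomically");
    }
}
```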