Abstract:
A method, computer program product and system for generating and maintaining synthetic context events. The steps include searching a data structure of synthetic context-based objects and associated data for a pattern of context exhibited at a first specified frequency within a first specified time period; combining the synthetic context-based objects and associated data exhibiting the pattern of context at the first specified frequency within the first specified time period into a synthetic context event; and optimizing and maintaining the synthetic context event by searching the data structure for additional synthetic context-based objects and associated data exhibiting the same pattern of context in a second specified time period different from the first specified time period, and adding the additional synthetic context-based objects and associated data to the synthetic context event.
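The search-combine-maintain steps above can be sketched in plain Python. This is a minimal illustration, not the patented implementation: the `pattern` label, numeric `timestamp`, and the in-memory list standing in for the data structure are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ContextObject:
    pattern: str      # the pattern of context the object exhibits
    timestamp: float  # when the object and its data were observed

@dataclass
class SyntheticContextEvent:
    pattern: str
    members: list = field(default_factory=list)

def generate_event(objects, pattern, min_freq, start, end):
    """Search the data structure for `pattern` exhibited at least
    `min_freq` times within [start, end); combine hits into an event."""
    hits = [o for o in objects
            if o.pattern == pattern and start <= o.timestamp < end]
    return SyntheticContextEvent(pattern, hits) if len(hits) >= min_freq else None

def maintain_event(event, objects, start, end):
    """Optimize and maintain the event by adding objects exhibiting the
    same pattern in a second, different time period."""
    event.members.extend(o for o in objects
                         if o.pattern == event.pattern
                         and start <= o.timestamp < end)
    return event
```

The two-phase shape mirrors the claim: the first call creates the event only if the frequency threshold is met; the second grows it from a later window.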
Abstract:
A method, system and computer program product for maximizing the utility of data obtained from multiple intersecting data structures and stored in a multi-dimensional information space. The method includes the steps of generating a rigid mathematical structure within the multi-dimensional information space; dividing the rigid mathematical structure into segments, each segment having a volume determined by a time of access to the segment relative to an event, a duration of access to the segment and a quantity of data in the segment; and determining a sellable price point for each segment of the rigid mathematical structure based on the volume of the segment.
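The abstract names three determinants of a segment's volume but no explicit formula. One plausible reading, sketched below as an assumption, treats them as orthogonal dimensions of a rectangular segment, with a linear per-unit-volume rate for the price point:

```python
def segment_volume(time_to_event, access_duration, data_quantity):
    """Volume of one segment of the rigid mathematical structure,
    taking the three stated determinants as orthogonal dimensions
    (an illustrative assumption)."""
    return time_to_event * access_duration * data_quantity

def sellable_price(segment, rate_per_unit_volume):
    """Price point for a segment based on its volume; the linear
    rate is likewise an illustrative assumption."""
    return segment_volume(*segment) * rate_per_unit_volume
```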
Abstract:
A method, system and computer program product for reducing an amount of data representing a genetic sequence of an organism using a Hadoop type distributed file system. The method includes the steps of breaking a surprisal data filter and an uncompressed genetic sequence into blocks of data of a fixed size; distributing the blocks of data to a plurality of worker nodes within clusters of the distributed file system and replicating the blocks of data within each of the worker nodes; tasking the plurality of worker nodes to perform a map job comprising mapping the surprisal data filter relative to the uncompressed genetic sequence; and when a worker node has reported a completion of the map job, tasking the worker node with a reduce job based on a specific key to an output of surprisal data and associated metadata.
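The map and reduce jobs can be sketched in plain Python (not an actual Hadoop job). Taking "surprisal" to mean the positions where the uncompressed sequence deviates from the filter is an assumption made for illustration; the same map/reduce shape applies to the epigenetic variant in the next abstract.

```python
def split_blocks(data, size):
    """Break a sequence (or filter) into fixed-size blocks for workers."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def map_job(block_id, filter_block, seq_block):
    """Map: compare the surprisal-data filter block against the
    uncompressed sequence block; emit only the deviating positions."""
    return [(block_id, (i, s))
            for i, (f, s) in enumerate(zip(filter_block, seq_block))
            if f != s]

def reduce_job(key, values):
    """Reduce, keyed on block id: output surprisal data plus metadata."""
    return {"block": key,
            "surprisal": sorted(values),
            "metadata": {"count": len(values)}}
```

Because only deviations are emitted, a sequence that mostly matches the filter reduces to a much smaller surprisal record, which is the data-reduction effect the abstract describes.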
Abstract:
A method, system and computer program product for reducing an amount of epigenetic data representing epigenetic modifications of a genetic sequence of an organism using a Hadoop type distributed file system. The method includes the steps of breaking epigenetic data and a reference epigenetic map into blocks of data of a fixed size; distributing the blocks of data to a plurality of worker nodes within clusters of the distributed file system and replicating the blocks of data within each of the worker nodes; tasking the plurality of worker nodes to perform a map job comprising mapping the reference epigenetic map relative to the epigenetic data; and when a worker node has reported a completion of the map job, tasking the worker node with a reduce job based on a specific key to an output of epigenetic surprisal data and associated metadata.
Abstract:
A processor-implemented method, system, and/or computer program product secures data stores. A non-contextual data object is associated with a context object to define a synthetic context-based object. The synthetic context-based object is associated with at least one specific data store in a data structure, where the specific data store contains data that is associated with data contained in the non-contextual data object and the context object. An ambiguous request is received from a user for data related to an ambiguous subject-matter. The context of the ambiguous request from the user is determined and associated with the synthetic context-based object that is associated with a specific data store, where that specific data store contains data related to the context of the now contextual request from the user. The user is then provided access to the specific data store while blocking access to other data stores in the data structure.
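A minimal sketch of the access-control flow, assuming a dictionary keyed by (non-contextual object, context) pairs standing in for the synthetic context-based objects; all names and example data here are hypothetical:

```python
def build_registry():
    """Hypothetical mapping: (non-contextual object, context) -> data
    store id. Each pair plays the role of a synthetic context-based
    object. The ambiguous word 'java' is disambiguated by context."""
    return {("java", "programming"): "store_code",
            ("java", "beverage"): "store_coffee"}

def serve_request(term, request_context, registry, stores):
    """Determine the context of the ambiguous request, resolve it to
    one synthetic context-based object, then grant access only to the
    associated store while blocking all others."""
    store_id = registry.get((term, request_context))
    return {sid: data for sid, data in stores.items() if sid == store_id}
```

Note the failure mode: a request whose context matches no synthetic context-based object yields access to nothing, which is the conservative default the abstract's blocking behavior implies.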
Abstract:
A method, a system and a computer program product for determining whether a change in value of a data item relating to an entity being tracked within a cohort is statistically and contextually significant. A computer captures a plurality of data items relating to the entity being tracked at a time N+1. The value of the data item at time N+1 is compared to a value of a historical data item at time N. If the value of the data item at time N+1 is different from the value of the historical data item at time N, a determination is made that a change has occurred. If a change in a data item has occurred, it is determined whether the change in the data item relating to the entity being tracked is statistically and contextually significant in n-space on multiple dimensions.
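The change-detection step can be sketched as follows. The z-score threshold used here is an illustrative stand-in for the patent's n-space significance test, which the abstract does not specify:

```python
from statistics import mean, stdev

def significant_change(history, new_value, z_threshold=2.0):
    """Compare the value at time N+1 against time N; if it changed,
    flag the change as significant when it lies more than
    `z_threshold` standard deviations from the historical mean
    (illustrative assumption, not the patented test)."""
    if new_value == history[-1]:
        return False                      # no change occurred
    sigma = stdev(history)
    if sigma == 0:
        return True                       # any change from a constant series
    return abs(new_value - mean(history)) / sigma > z_threshold
```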
Abstract:
A computer implemented method, system, and/or computer program product creates a suggested diagnostic test selection. A description of a current patient includes a current medical complaint, medical history, and physical examination result for the current patient. A cohort for the current patient is made up of persons who have a substantially similar medical complaint, medical history, and physical examination result as the current patient. Past diagnostic test sets used to make correct medical diagnoses for persons in the cohort are identified and stored in a cohort diagnostic test database. The past diagnostic test sets are sorted based on increasing levels of detrimental effects posed by each of the past diagnostic test sets. The sorted diagnostic test sets are then presented to a health care provider for the current patient.
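The cohort-building and sorting steps can be sketched in Python. The exact-match cohort criterion and the numeric `detriment` field are simplifying assumptions; the abstract only requires "substantially similar" patients and sorting by increasing detrimental effect:

```python
def build_cohort(patients, current):
    """Cohort: persons with a substantially similar medical complaint,
    history, and examination result (exact match here for simplicity)."""
    return [p for p in patients
            if p["complaint"] == current["complaint"]
            and p["history"] == current["history"]
            and p["exam"] == current["exam"]]

def suggested_tests(cohort):
    """Keep past test sets that led to correct diagnoses, sorted by
    increasing detrimental effect, for presentation to the provider."""
    sets = [p["test_set"] for p in cohort if p["correct"]]
    return sorted(sets, key=lambda t: t["detriment"])
```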
Abstract:
A method prevents a cascading failure in a complex stream computer system. The method includes receiving binary data that identifies multiple subcomponents in a complex stream computer system. These identified multiple subcomponents include upstream subcomponents that generate multiple outputs and a downstream subcomponent that executes a downstream computational process that uses the multiple outputs. The method dynamically adjusts which of multiple inputs are used by the downstream subcomponent in an attempt to generate an output from the downstream subcomponent that meets a predefined trustworthiness level for making a first type of prediction. If no variations of execution of one or more functions used by the downstream subcomponent ever produce an output that meets the predefined trustworthiness level for making a first type of prediction, then computer hardware executes a new downstream computational process that produces a different second type of prediction.
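The dynamic input adjustment can be sketched as a search over combinations of upstream outputs. The exhaustive largest-first search and the `(output, trust)` return shape are illustrative assumptions, not the patented mechanism:

```python
from itertools import combinations

def adjust_inputs(inputs, process, threshold):
    """Dynamically try combinations of upstream outputs, largest first,
    until the downstream computational process yields an output meeting
    the predefined trustworthiness level. Returning None signals the
    caller to execute a new downstream process that produces a
    different, second type of prediction."""
    for r in range(len(inputs), 0, -1):
        for subset in combinations(inputs, r):
            output, trust = process(subset)
            if trust >= threshold:
                return output
    return None
```

Dropping an untrustworthy upstream feed rather than propagating it downstream is what breaks the cascade: the downstream prediction is either made from a trusted subset or replaced outright.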
Abstract:
A processor-implemented method, system, and/or computer program product generates and implements intuitively comfortable, task-appropriate frames of reference for multiple dimensions of context constraints for related sets of objects within an integrated development environment (IDE). One or more processors identify a hierarchical set of context constraints for an object, and depict the hierarchical set of context constraints for the object on an IDE using a visual metaphor selected by a user. The processor(s) receive a zoom-in input for a first context constraint in the hierarchical set of context constraints, and place the IDE in mention mode, such that use of the hierarchical set of context constraints against the object is disabled. In response to the IDE being placed in mention mode, the processor(s) display detail of the first context constraint on the IDE, and receive changes to the first context constraint to create a modified first context constraint.
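The use/mention distinction driving the abstract can be modeled in a toy class: in mention mode the constraints are displayed and editable but no longer enforced against the object. The class, method names, and predicate representation are all hypothetical:

```python
class ConstraintIDE:
    """Toy model of the abstract's mention mode: zooming in on a
    constraint disables enforcement so the constraint can be edited."""

    def __init__(self, constraints):
        self.constraints = dict(constraints)  # name -> predicate
        self.mention_mode = False

    def zoom_in(self, name):
        self.mention_mode = True              # disable enforcement
        return self.constraints[name]         # detail for display

    def modify(self, name, new_rule):
        assert self.mention_mode, "constraints are live in use mode"
        self.constraints[name] = new_rule

    def zoom_out(self):
        self.mention_mode = False

    def check(self, obj):
        if self.mention_mode:
            return True                       # constraints disabled
        return all(rule(obj) for rule in self.constraints.values())
```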
Abstract:
A processor-implemented method, system, and/or computer program product mitigates subjectively disturbing content. A context-based data gravity wells membrane supports one or more gravity wells, which hold subjectively disturbing synthetic context-based objects made up of a non-contextual data object, a first context object, and a second context object. The first context object defines the non-contextual data object, and the second context object describes how subjectively disturbing the content generated by combining the non-contextual data object and the first context object is, according to predefined parameters described by the second context object. New content is passed across the context-based data gravity wells membrane. Subjectively disturbing content from the new content is trapped by a context-based data gravity well in response to a non-contextual data object and context objects from the new content matching those of that gravity well, thereby reducing a level of subjective discomfort imposed by the new content.
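The trapping rule reduces to an exact match on the triple of non-contextual object and two context objects. A minimal sketch, with dictionary keys and example values invented for illustration:

```python
def pass_across_membrane(content, wells):
    """Pass new content across the membrane: a gravity well traps it
    only when the non-contextual data object and both context objects
    match the well's own."""
    key = (content["object"], content["ctx1"], content["ctx2"])
    for well in wells:
        if (well["object"], well["ctx1"], well["ctx2"]) == key:
            well["trapped"].append(content)   # held by the well
            return True
    return False  # not disturbing under any well's parameters
```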