Abstract:
Techniques and architectures for data modeling and management. Data modeling services are provided to agents within multiple different operating environments of a computing environment having at least one database stored on one or more physical memory devices, the one or more physical memory devices being communicatively coupled with one or more hardware processors. Building and versioning of data modeling projects, and the data utilized for the data modeling projects, are coordinated with the one or more hardware processors.
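The following is a minimal Python sketch of the coordination idea described in this abstract: a shared service that accepts build requests from agents in different operating environments and versions each data modeling project centrally. All names (ModelingService, DataModelingProject, build_project) are hypothetical and are not taken from the disclosure.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DataModelingProject:
    name: str
    versions: List[dict] = field(default_factory=list)  # ordered version history

    def build(self, data_sources: List[str], params: dict) -> dict:
        # Record a new build as the next version of the project.
        version = {
            "number": len(self.versions) + 1,
            "data_sources": data_sources,
            "params": params,
        }
        self.versions.append(version)
        return version


class ModelingService:
    """Serves data modeling requests from agents in different operating environments."""

    def __init__(self) -> None:
        self.projects: Dict[str, DataModelingProject] = {}

    def build_project(self, environment: str, name: str,
                      data_sources: List[str], params: dict) -> dict:
        # Agents from any environment share one project registry, so building
        # and versioning are coordinated in a single place.
        project = self.projects.setdefault(name, DataModelingProject(name))
        version = project.build(data_sources, params)
        print(f"[{environment}] built {name} v{version['number']}")
        return version


service = ModelingService()
service.build_project("sandbox", "churn-model", ["crm.accounts"], {"algo": "gbm"})
service.build_project("production", "churn-model", ["crm.accounts"], {"algo": "gbm"})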
Abstract:
In accordance with disclosed embodiments, there are provided systems, methods, and apparatuses for implementing predictive engine evaluation and replay of engine performance. An exemplary system may include, for example: means for selecting a first set of one or more algorithms for a machine learning model; tuning a first group of predictive engine parameters for the machine learning model; training the machine learning model with one or more sources of data using the selected first set of one or more algorithms and the first group of tuned predictive engine parameters to generate a first predictive engine variant from the trained machine learning model; selecting a second set of one or more algorithms for the machine learning model which are different from the first set; tuning a second group of predictive engine parameters for the machine learning model which are different from the first group; training the machine learning model with the one or more sources of data using the selected second set of one or more algorithms and the second group of tuned predictive engine parameters to generate a second predictive engine variant from the trained machine learning model; performing multiple experiments using the first and second predictive engine variants; comparing results from the multiple experiments; and deploying either the first predictive engine variant or the second predictive engine variant based on the comparison of the results of the multiple experiments. Other related embodiments are disclosed.
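A hedged sketch of the variant-comparison flow described above, using scikit-learn as a stand-in; the disclosure does not name any library, and the helper names (build_variant, run_experiment) are illustrative only.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)  # stand-in data source


def build_variant(algorithm, params):
    # "Tuning" here simply fixes a parameter group before training.
    return algorithm(**params)


# Variant 1: first algorithm set and parameter group.
variant_1 = build_variant(LogisticRegression, {"C": 1.0, "max_iter": 500})
# Variant 2: a different algorithm set and parameter group.
variant_2 = build_variant(GradientBoostingClassifier, {"n_estimators": 100})


def run_experiment(variant):
    # Multiple experiments via cross-validation folds over the same data source.
    return cross_val_score(variant, X, y, cv=5).mean()


score_1, score_2 = run_experiment(variant_1), run_experiment(variant_2)

# Deploy whichever variant compares better across the experiments.
deployed = variant_1 if score_1 >= score_2 else variant_2
print(f"deploying {type(deployed).__name__} (scores: {score_1:.3f} vs {score_2:.3f})")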
Abstract:
Methods, systems, and devices for multi-tenant workflow processing are described. In some cases, a cloud platform may utilize a set of pre-defined batch processes (e.g., workflow templates) and tenant-specific configurations for instantiating and executing tenant-specific batch processes for each tenant. As such, the cloud platform may utilize common data process workflows for each tenant, where a configuration specifies tenant-specific information for the common data process workflows. The workflow templates may include a set of job definitions (e.g., actions for a server to execute) and a schedule defining the frequency for running the templates for a specific project. The configurations may indicate a tenant for which to execute the workflow templates, and may include tenant-specific information to override default template information. The cloud platform or a designated server or server cluster may instantiate and execute workflows based on one or more combinations of configurations and indicated workflow templates.
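A minimal sketch of combining a shared workflow template with a tenant-specific configuration, as described above. All names (WorkflowTemplate, TenantConfig, instantiate_workflow) and the example schedule values are hypothetical and illustrate only the override-and-instantiate idea.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class WorkflowTemplate:
    name: str
    jobs: List[str]                       # actions for a server to execute
    schedule: str                         # e.g., a cron expression
    defaults: Dict[str, str] = field(default_factory=dict)


@dataclass
class TenantConfig:
    tenant_id: str
    template_name: str
    overrides: Dict[str, str] = field(default_factory=dict)


def instantiate_workflow(template: WorkflowTemplate, config: TenantConfig) -> dict:
    # Tenant-specific values override the template defaults.
    settings = {**template.defaults, **config.overrides}
    return {
        "tenant": config.tenant_id,
        "jobs": template.jobs,
        "schedule": settings.get("schedule", template.schedule),
        "settings": settings,
    }


nightly_sync = WorkflowTemplate(
    name="nightly-sync",
    jobs=["extract", "transform", "load"],
    schedule="0 2 * * *",
    defaults={"region": "us-east"},
)
acme = TenantConfig(tenant_id="acme", template_name="nightly-sync",
                    overrides={"region": "eu-west", "schedule": "0 4 * * *"})

print(instantiate_workflow(nightly_sync, acme))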
Abstract:
Disclosed are methods and systems of tracking the deployment of a predictive engine for machine learning, including steps to deploy an engine variant of the predictive engine based on an engine parameter set, wherein the engine parameter set identifies at least one data source and at least one algorithm; receive one or more queries to the deployed engine variant from one or more end-user devices, and in response, generate predicted results; receive one or more actual results corresponding to the predicted results; associate the queries, the predicted results, and the actual results with a replay tag, and record them with the corresponding deployed engine variant.
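A sketch of the replay-tagging idea in this abstract: queries, predicted results, and later-arriving actual results are associated with a replay tag and recorded against the deployed engine variant. The record layout and names (ReplayRecord, DeployedVariant, serve, report_actual) are assumptions for illustration, not the patented design.

from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class ReplayRecord:
    query: Dict[str, Any]
    predicted: Any
    actual: Any = None
    replay_tag: str = ""


@dataclass
class DeployedVariant:
    engine_params: Dict[str, Any]       # identifies data source(s) and algorithm(s)
    records: List[ReplayRecord] = field(default_factory=list)


    def serve(self, query: Dict[str, Any], replay_tag: str) -> ReplayRecord:
        # Stand-in prediction; a real engine would run the configured algorithm.
        record = ReplayRecord(query=query, predicted="predicted-value",
                              replay_tag=replay_tag)
        self.records.append(record)
        return record

    def report_actual(self, record: ReplayRecord, actual: Any) -> None:
        # Actual outcomes arrive later and are recorded against the same tag
        # and the same deployed variant.
        record.actual = actual


variant = DeployedVariant(engine_params={"source": "events-db", "algorithm": "gbm"})
rec = variant.serve({"user": 42}, replay_tag="2024-06-rollout")
variant.report_actual(rec, actual="observed-value")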