RELEASE ORCHESTRATION FOR CLOUD SERVICES
    Invention Application

    Publication No.: US20200241863A1

    Publication Date: 2020-07-30

    Application No.: US16261495

    Filing Date: 2019-01-29

    Abstract: A release orchestration mechanism for cloud services. According to some implementations, while an app-aware proxy routes production traffic to a first application (app) version that runs in a plurality of container orchestration system (COS) pods having first app version containers, configuration information identifying a second app version is received. When a threshold number of COS pods having second app version containers are live, a validation of the second app version is caused. A transition is then made to sending production traffic to the second app version containers. After causing the transition, timers are started based on a time period indicated in the configuration information, and the first app version containers are instructed to shut down gracefully. On expiration of the timers, any of the COS pods having first app version containers that have not yet shut down are forced to shut down.
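    The sequence in this abstract (wait for a threshold of new pods, validate, cut over production traffic, then drain the old pods on a timer) can be pictured with a minimal Go sketch. All names here (Pod, orchestrateCutover, switchProductionTraffic, and the stubbed helpers) are hypothetical stand-ins for the proxy and COS APIs; this illustrates the described flow, not the patented implementation.

        // Minimal sketch of the cutover-and-drain sequence; all names are illustrative.
        package main

        import (
            "fmt"
            "time"
        )

        type Pod struct {
            Name string
            Live bool
            Down bool
        }

        // orchestrateCutover validates the new version once enough of its pods are live,
        // switches production traffic to it, asks the old pods to shut down gracefully,
        // and force-stops any stragglers when the drain timer expires.
        func orchestrateCutover(oldPods, newPods []*Pod, threshold int, drain time.Duration) error {
            if countLive(newPods) < threshold {
                return fmt.Errorf("only %d second-version pods live, need %d", countLive(newPods), threshold)
            }
            if err := validate(newPods); err != nil { // validation of the second app version
                return err
            }
            switchProductionTraffic(newPods) // proxy now sends production traffic to the new containers

            for _, p := range oldPods {
                gracefulShutdown(p) // instruct first app version containers to shut down gracefully
            }
            time.AfterFunc(drain, func() { // timer based on the configured time period
                for _, p := range oldPods {
                    if !p.Down {
                        forceShutdown(p) // not yet shut down when the timer expired: force it
                    }
                }
            })
            return nil
        }

        func countLive(pods []*Pod) int {
            n := 0
            for _, p := range pods {
                if p.Live {
                    n++
                }
            }
            return n
        }

        // Stubs standing in for calls into the proxy and the container orchestration system.
        func validate(pods []*Pod) error          { return nil }
        func switchProductionTraffic(pods []*Pod) {}
        func gracefulShutdown(p *Pod)             { fmt.Println("graceful shutdown:", p.Name) }
        func forceShutdown(p *Pod)                { p.Down = true; fmt.Println("forced shutdown:", p.Name) }

        func main() {
            old := []*Pod{{Name: "app-v1-0", Live: true}}
            next := []*Pod{{Name: "app-v2-0", Live: true}}
            if err := orchestrateCutover(old, next, 1, 2*time.Second); err != nil {
                fmt.Println(err)
            }
            time.Sleep(3 * time.Second) // let the drain timer fire in this toy run
        }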

    Providing a routing framework for facilitating dynamic workload scheduling and routing of message queues for fair management of resources for application servers in an on-demand services environment
    Invention Grant (in force)

    Publication No.: US09348648B2

    Publication Date: 2016-05-24

    Application No.: US13841649

    Filing Date: 2013-03-15

    Abstract: In accordance with embodiments, there are provided mechanisms and methods for facilitating dynamic workload scheduling and routing of message queues for fair management of the resources for application servers in an on-demand services environment. In one embodiment and by way of example, a method includes detecting an organization, of a plurality of organizations, that is starving for resources. The organization may be seeking performance of a job request at a computing system within a multi-tenant database system. The method may further include consulting, based on a routing policy, a routing table for a plurality of queues available for processing the job request, selecting a queue of the plurality of queues for the organization based on a fair usage analysis obtained from the routing policy, and routing the job request to the selected queue.

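    The routing steps in this abstract (detect a starving organization, consult a routing table under a routing policy, pick a queue by fair-usage analysis, and route the job there) might look roughly like the Go sketch below. Router, RoutingPolicy, and the starving check are assumed names used only for illustration.

        // Rough sketch of fair-usage queue routing; all names are illustrative.
        package main

        import "fmt"

        type Job struct {
            Org  string
            Name string
        }

        type Queue struct {
            Name string
            Jobs chan Job
        }

        // RoutingPolicy scores a candidate queue for an org; higher means fairer.
        type RoutingPolicy func(org string, q *Queue) float64

        type Router struct {
            table  map[string][]*Queue // routing table: job type -> candidate queues
            policy RoutingPolicy
            usage  map[string]int // recent resource usage per org
        }

        // starving reports whether an org has been unable to obtain resources.
        func (r *Router) starving(org string) bool { return r.usage[org] == 0 }

        // Route selects the queue with the best fair-usage score and enqueues the job.
        func (r *Router) Route(jobType string, job Job) (*Queue, error) {
            candidates := r.table[jobType]
            if len(candidates) == 0 {
                return nil, fmt.Errorf("no queue can process %q", jobType)
            }
            best := candidates[0]
            for _, q := range candidates[1:] {
                if r.policy(job.Org, q) > r.policy(job.Org, best) {
                    best = q
                }
            }
            best.Jobs <- job
            return best, nil
        }

        func main() {
            q := &Queue{Name: "bulk", Jobs: make(chan Job, 16)}
            r := &Router{
                table:  map[string][]*Queue{"bulk": {q}},
                policy: func(org string, q *Queue) float64 { return 16 - float64(len(q.Jobs)) },
                usage:  map[string]int{},
            }
            if r.starving("acme") { // org detected as starving for resources
                chosen, _ := r.Route("bulk", Job{Org: "acme", Name: "export"})
                fmt.Println("routed to", chosen.Name)
            }
        }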

    CLOUD SERVICES RELEASE ORCHESTRATION

    Publication No.: US20230087544A1

    Publication Date: 2023-03-23

    Application No.: US18049265

    Filing Date: 2022-10-24

    Abstract: According to some implementations, while a proxy routes production traffic to a first application (app) version that runs in a plurality of container orchestration system (COS) pods having first app version containers, configuration information is received that includes an identification of a second app version container image for a second app version. The second app version is an updated version of the first app version. COS pods having second app version containers are brought up based on the second app version container image identified in the configuration information. Test and/or warmup traffic is caused to be routed to the second app version containers. Responsive to an indication regarding the routing of the test and/or warmup traffic to the second app version, a transition is caused to sending production traffic to the second app version containers instead of to the first app version.
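    A rough Go sketch of the pre-cutover phase described here, assuming the proxy distinguishes traffic classes: the configuration names the second app version's container image, new COS pods are brought up from it, test/warmup traffic is steered to them, and production traffic is transitioned only after a healthy indication. ReleaseConfig, Proxy, and the helper functions are illustrative, not taken from the patent.

        // Sketch of warmup routing before cutover; all names are illustrative.
        package main

        import "fmt"

        // ReleaseConfig is the configuration information received while v1 serves production.
        type ReleaseConfig struct {
            SecondAppVersion string
            ContainerImage   string // identifies the second app version container image
        }

        type Proxy struct {
            productionTarget string // where production traffic goes
            warmupTarget     string // where test/warmup traffic goes
        }

        // Route picks a backend by traffic class; production stays on v1 until cutover.
        func (p *Proxy) Route(trafficClass string) string {
            if trafficClass == "test" || trafficClass == "warmup" {
                return p.warmupTarget
            }
            return p.productionTarget
        }

        // Cutover transitions production traffic to the second app version containers.
        func (p *Proxy) Cutover(newTarget string) { p.productionTarget = newTarget }

        func main() {
            cfg := ReleaseConfig{SecondAppVersion: "v2", ContainerImage: "registry.example/app:v2"}
            bringUpPods(cfg) // COS pods with second app version containers

            proxy := &Proxy{productionTarget: "app-v1", warmupTarget: "app-v2"}
            fmt.Println("warmup goes to:", proxy.Route("warmup"))         // app-v2
            fmt.Println("production goes to:", proxy.Route("production")) // app-v1

            if warmupLooksHealthy() { // indication regarding the test/warmup traffic
                proxy.Cutover("app-v2")
            }
            fmt.Println("production goes to:", proxy.Route("production")) // app-v2
        }

        func bringUpPods(cfg ReleaseConfig) { fmt.Println("starting pods from", cfg.ContainerImage) }
        func warmupLooksHealthy() bool      { return true }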

    Systems and techniques for utilizing resource aware queues and/or service sharing in a multi-server environment

    Publication No.: US11153371B2

    Publication Date: 2021-10-19

    Application No.: US16568149

    Filing Date: 2019-09-11

    Abstract: Systems and techniques for utilizing resource aware queues and/or service sharing in a multi-server environment. According to an example, an application server employs a traffic light metaphor to represent a utilization level of resources of the application server by associating a traffic light with each resource. A mapping is maintained that associates service requests with corresponding sets of affected traffic lights. A deferred queue is maintained for each traffic light to facilitate throttling of service requests directed to the application server that involve a resource that is under pressure. Responsive to receiving a service request directed to the application server, the service request is added directly or indirectly to one of multiple queues maintained in front of the application server based on a priority associated with the service request. Service requests are serviced from the queues in accordance with a priority associated with the queues.
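    The traffic-light scheme in this abstract can be pictured with the Go sketch below: each resource has a light, each request kind maps to the lights it affects, and a request whose light is red is parked in that light's deferred queue rather than the priority queues in front of the server. Light, Server.Admit, and the resource names are assumptions made for illustration only.

        // Sketch of traffic-light throttling with deferred queues; all names are illustrative.
        package main

        import "fmt"

        type Light int

        const (
            Green Light = iota
            Yellow
            Red
        )

        type Request struct {
            Kind     string
            Priority int
        }

        type Server struct {
            lights   map[string]Light     // one traffic light per resource
            affects  map[string][]string  // request kind -> affected resources
            deferred map[string][]Request // one deferred queue per traffic light
            queues   map[int][]Request    // priority queues in front of the server
        }

        // Admit throttles a request if any affected resource is under pressure (red);
        // otherwise it is added to the queue matching the request's priority.
        func (s *Server) Admit(req Request) {
            for _, res := range s.affects[req.Kind] {
                if s.lights[res] == Red {
                    s.deferred[res] = append(s.deferred[res], req) // throttle: defer it
                    return
                }
            }
            s.queues[req.Priority] = append(s.queues[req.Priority], req)
        }

        func main() {
            s := &Server{
                lights:   map[string]Light{"db-connections": Red, "cpu": Green},
                affects:  map[string][]string{"report": {"db-connections"}, "ping": {"cpu"}},
                deferred: map[string][]Request{},
                queues:   map[int][]Request{},
            }
            s.Admit(Request{Kind: "report", Priority: 1}) // deferred: db-connections is red
            s.Admit(Request{Kind: "ping", Priority: 2})   // queued normally
            fmt.Println("deferred:", len(s.deferred["db-connections"]), "queued:", len(s.queues[2]))
        }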
