RAFT CONSENSUS VICE LEADER OPTIMIZATION

    Publication (Announcement) Number: US20250133131A1

    Publication (Announcement) Date: 2025-04-24

    Application Number: US18491596

    Filing Date: 2023-10-20

    Abstract: Described is an improved system, method, and computer program product for performing elections in a computing system. Approaches are described for a non-leader member of a member set to self-identify as the vice-leader. When it detects the leader's death, rather than waiting out the random, bounded election timeout, the vice-leader can immediately send its "vote for me" message to the other members. This puts it ahead of the other members racing to announce their candidacies, and causes the election to conclude in the initial round far more often.
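
    A minimal sketch of that timing difference, assuming a simplified Raft-style member with a hypothetical request_votes helper standing in for the real "vote for me" (RequestVote) broadcast; the timeout bounds are illustrative, not taken from the patent.

        import random
        import time

        ELECTION_TIMEOUT_RANGE = (0.15, 0.30)  # seconds; illustrative Raft-style bounds

        class Member:
            def __init__(self, member_id, is_vice_leader=False):
                self.member_id = member_id
                self.is_vice_leader = is_vice_leader
                self.current_term = 0

            def request_votes(self):
                # Stand-in for broadcasting a "vote for me" message to the other members.
                self.current_term += 1
                print(f"member {self.member_id}: requesting votes for term {self.current_term}")

            def on_leader_failure_detected(self):
                if self.is_vice_leader:
                    # The vice-leader announces its candidacy immediately, ahead of
                    # members still sleeping out their randomized timeouts.
                    self.request_votes()
                else:
                    # Ordinary members wait a random, bounded period before becoming candidates.
                    time.sleep(random.uniform(*ELECTION_TIMEOUT_RANGE))
                    self.request_votes()

        Member("m2", is_vice_leader=True).on_leader_failure_detected()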

    MERGING A NEW REGION INTO CLASSIFIED REALMS

    Publication (Announcement) Number: US20250133087A1

    Publication (Announcement) Date: 2025-04-24

    Application Number: US18493355

    Filing Date: 2023-10-24

    Abstract: A method may include generating a first cloud network associated with a first security level and including data associated with a service. The method may include generating a second cloud network associated with the first security level, deploying the service and its associated data to the second cloud network, and generating a first ingress channel to permit data to be transmitted to the second cloud network. Restricted data associated with a tenant may be deployed to the second cloud network. The method may include generating a third cloud network associated with the first security level and including the service and its associated data, and generating a second ingress channel to permit data to be transmitted to the third cloud network. A data sync may be implemented between the second and third cloud networks to deploy the restricted data to the third cloud network.
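
    An illustrative orchestration sketch of that provisioning flow; the generate_cloud_network, create_ingress_channel, deploy, and data_sync helpers are hypothetical stand-ins for whatever control-plane APIs an implementation would actually call.

        def generate_cloud_network(name, security_level):
            # Stand-in for provisioning an isolated cloud network at a given security level.
            return {"name": name, "security_level": security_level, "ingress": [], "data": {}}

        def create_ingress_channel(network, source):
            # Stand-in for opening a channel that permits data into the network.
            network["ingress"].append(source)

        def deploy(network, **artifacts):
            network["data"].update(artifacts)

        def data_sync(src, dst, keys):
            # Stand-in for the sync that copies restricted data between peer networks.
            for key in keys:
                dst["data"][key] = src["data"][key]

        LEVEL = "first-security-level"
        first = generate_cloud_network("realm-1", LEVEL)
        deploy(first, service="svc", service_data="data")

        second = generate_cloud_network("realm-2", LEVEL)
        deploy(second, service="svc", service_data="data")
        create_ingress_channel(second, source=first["name"])
        deploy(second, tenant_restricted_data="tenant-secrets")

        third = generate_cloud_network("realm-3", LEVEL)
        deploy(third, service="svc", service_data="data")
        create_ingress_channel(third, source=first["name"])
        data_sync(second, third, keys=["tenant_restricted_data"])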

    SYSTEMS AND METHODS FOR COMPILE-TIME DEPENDENCY INJECTION AND LAZY SERVICE ACTIVATION FRAMEWORK

    Publication (Announcement) Number: US20250130782A1

    Publication (Announcement) Date: 2025-04-24

    Application Number: US19005099

    Filing Date: 2024-12-30

    Inventor: Jeffrey Trent

    Abstract: In accordance with an embodiment, described herein are systems and methods for providing a compile-time dependency injection and lazy service activation framework that generates source code reflecting service dependencies and enables an application developer using the system to build microservice applications or cloud-native services. The framework includes a service registry that provides lazy service activation and meta-information associated with one or more services, in the form of interfaces or APIs describing the functionality of each service and its dependencies on other services. An application's use of particular services can be intercepted and accommodated during code generation at compile time, avoiding the need for reflection. Extensibility features allow application developers to provide their own templates for code generation, or to provide alternative service implementations for use with the application in place of the reference implementation provided by the framework.
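
    A small sketch of the lazy service-registry idea, with hypothetical Clock and Greeter services; in the framework described above the bindings would be emitted as generated source at compile time rather than registered at runtime, and no reflection would be used.

        class ServiceRegistry:
            """Maps a service interface to a provider; instances are activated lazily."""

            def __init__(self):
                self._providers = {}   # interface -> zero-arg factory (from generated bindings)
                self._instances = {}   # interface -> activated singleton

            def bind(self, interface, provider):
                self._providers[interface] = provider

            def lookup(self, interface):
                # Lazy activation: the service is only constructed on first use.
                if interface not in self._instances:
                    self._instances[interface] = self._providers[interface]()
                return self._instances[interface]

        # Hypothetical services: Greeter depends on Clock, wired by "generated" bindings.
        class Clock:
            def now(self):
                return "2025-04-24"

        class Greeter:
            def __init__(self, clock):
                self.clock = clock

            def greet(self):
                return f"hello at {self.clock.now()}"

        registry = ServiceRegistry()
        registry.bind(Clock, lambda: Clock())
        registry.bind(Greeter, lambda: Greeter(registry.lookup(Clock)))  # dependency resolved via the registry

        print(registry.lookup(Greeter).greet())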

    DEVELOPING A PROGRAMMING LANGUAGE MODEL FOR MACHINE LEARNING TASKS

    Publication (Announcement) Number: US20250130780A1

    Publication (Announcement) Date: 2025-04-24

    Application Number: US18382018

    Filing Date: 2023-10-19

    Abstract: A method develops a programming language model for machine learning tasks. The method includes adjusting a token list to include a language token used by a tokenizer for a pretrained language model. The pretrained language model includes a set of layers, which includes a set of initial layers, an embedding layer, and an output layer. The method further includes performing an output layer modification that replaces the output vector with the embedding vector. The method further includes freezing the set of initial layers to generate a set of frozen layers of the pretrained language model that do not update during training. The method further includes training the pretrained language model using the language token, the output layer modification, and the set of frozen layers to form a fine-tuned model from the pretrained language model.
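
    A minimal PyTorch-flavoured sketch of the freezing and output-layer steps on a toy model; the layer names, the single added language token, and tying the output projection to the embedding weights are illustrative assumptions, not the patented architecture.

        import torch
        from torch import nn

        class ToyLM(nn.Module):
            def __init__(self, vocab_size, dim):
                super().__init__()
                self.embedding = nn.Embedding(vocab_size, dim)
                self.initial_layers = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
                self.output = nn.Linear(dim, vocab_size, bias=False)

            def forward(self, token_ids):
                hidden = self.initial_layers(self.embedding(token_ids))
                return self.output(hidden)

        model = ToyLM(vocab_size=101, dim=32)  # vocabulary extended by one language token (id 100)

        # Output-layer modification: reuse the embedding vectors as the output projection,
        # a common weight-tying stand-in for "replacing the output vector with the embedding vector".
        model.output.weight = model.embedding.weight

        # Freeze the initial layers so they do not update during fine-tuning.
        for param in model.initial_layers.parameters():
            param.requires_grad = False

        trainable = [p for p in model.parameters() if p.requires_grad]
        optimizer = torch.optim.Adam(trainable, lr=1e-4)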

    Address matching from single string to address matching score

    Publication (Announcement) Number: US12282486B2

    Publication (Announcement) Date: 2025-04-22

    Application Number: US17733011

    Filing Date: 2022-04-29

    Abstract: Techniques are described herein for address matching from a single address string to an address matching score. In an embodiment, an address string is received and parsed into parsed address data. The parsed address data is then standardized by converting it into a standard format and replacing abbreviations and colloquial names with formal names. Once the address string has been standardized into a standardized street locale, candidate addresses that are identical or similar to the standardized street locale are identified and assigned a score. Each score comprises a probability that the respective candidate address and the standardized street locale represent the same place or location.
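
    An illustrative sketch of the standardize-then-score pipeline using only the Python standard library; the small abbreviation table and the difflib ratio are stand-ins for whatever parser, dictionary, and probabilistic scoring model an implementation would actually use.

        import difflib

        ABBREVIATIONS = {"st": "street", "ave": "avenue", "blvd": "boulevard", "n": "north"}

        def standardize(address):
            # Lowercase, strip punctuation, and expand abbreviations / colloquial names.
            tokens = address.lower().replace(",", "").replace(".", "").split()
            return " ".join(ABBREVIATIONS.get(tok, tok) for tok in tokens)

        def match_scores(address, candidates):
            # Score each candidate; the ratio stands in for the probability that the
            # candidate and the input address denote the same place or location.
            query = standardize(address)
            return sorted(
                ((cand, difflib.SequenceMatcher(None, query, standardize(cand)).ratio())
                 for cand in candidates),
                key=lambda pair: pair[1],
                reverse=True,
            )

        print(match_scores("123 N. Main St, Springfield",
                           ["123 North Main Street, Springfield", "123 Main Avenue, Springfield"]))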

    INTERCONNECTION OF GLOBAL VIRTUAL PLANES

    Publication (Announcement) Number: US20250126080A1

    Publication (Announcement) Date: 2025-04-17

    Application Number: US18912251

    Filing Date: 2024-10-10

    Abstract: A network environment comprises a plurality of host machines that are coupled to each other via a network fabric comprising a plurality of switches, which in turn include a plurality of ports. Each host machine comprises one or more GPUs. A first subset of the ports is associated with a first virtual plane, wherein the first virtual plane identifies a first collection of resources to be used for communicating packets from/to host machines associated with the first virtual plane. A second subset of the ports is associated with a second virtual plane that is different from the first virtual plane. A first host machine and a second host machine are associated with the first virtual plane and the second virtual plane, respectively. A packet is communicated from the first host machine to the second host machine using ports from the first subset of ports and the second subset of ports.
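
    A toy sketch of that cross-plane forwarding; the port-to-plane mapping and the interconnect_path helper are hypothetical stand-ins for the fabric's actual resource accounting and path selection.

        # Each switch port is pinned to exactly one virtual plane.
        PORT_TO_PLANE = {
            ("switch-1", 1): "plane-A", ("switch-1", 2): "plane-B",
            ("switch-2", 1): "plane-A", ("switch-2", 2): "plane-B",
        }
        HOST_PLANE = {"host-1": "plane-A", "host-2": "plane-B"}

        def ports_for_plane(plane):
            return [port for port, p in PORT_TO_PLANE.items() if p == plane]

        def interconnect_path(src_host, dst_host):
            # Stand-in for cross-plane forwarding: the packet leaves on a port of the
            # source host's plane and arrives on a port of the destination host's plane.
            egress = ports_for_plane(HOST_PLANE[src_host])[0]
            ingress = ports_for_plane(HOST_PLANE[dst_host])[0]
            return [egress, ingress]

        print(interconnect_path("host-1", "host-2"))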

    Providing Secure Wireless Network Access

    Publication (Announcement) Number: US20250119739A1

    Publication (Announcement) Date: 2025-04-10

    Application Number: US18482479

    Filing Date: 2023-10-06

    Abstract: Techniques for securely accessing a computer network are described. An access provider sends network access credentials to an access management device. Upon receiving the credentials, the access management device generates an image key that embeds the credentials. The access management device then presents the image key to a client device. The client device receives the image key and extracts the credentials from within the image key. The client device transmits the credentials to the access provider with an authentication request. Based on the credentials included with the authentication request, the access provider attempts to authenticate the client device. If authentication is successful, the access provider grants the client device access to the wireless network and resources accessible via the wireless network.
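
    A toy sketch of the credential hand-off, with base64 text standing in for the actual image key (for example, a QR-style image) and a hypothetical authenticate check on the access-provider side.

        import base64
        import json

        def generate_image_key(credentials):
            # Stand-in for embedding credentials into an image key presented to the client.
            return base64.b64encode(json.dumps(credentials).encode()).decode()

        def extract_credentials(image_key):
            # Client-side extraction of the embedded credentials.
            return json.loads(base64.b64decode(image_key))

        def authenticate(request):
            # Stand-in for the access provider's check of the presented credentials.
            expected = {"ssid": "corp-wifi", "passphrase": "example-passphrase"}
            return request["credentials"] == expected

        image_key = generate_image_key({"ssid": "corp-wifi", "passphrase": "example-passphrase"})
        creds = extract_credentials(image_key)
        granted = authenticate({"credentials": creds})
        print("access granted" if granted else "access denied")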

    APPLICATION-LAYER CONNECTION REDISTRIBUTION AMONG SERVICE INSTANCES

    Publication (Announcement) Number: US20250119472A1

    Publication (Announcement) Date: 2025-04-10

    Application Number: US18987990

    Filing Date: 2024-12-19

    Inventor: Rajiv Krishan

    Abstract: The technology disclosed herein enables redistribution of connections among service instances by determining a subset of the connections and terminating the subset. In a particular example, a method includes identifying the application-layer connections established between service instances and peers and identifying a high-load service instance of the service instances. A number of the application-layer connections established with the high-load service instance satisfies load criteria. The method further includes determining a subset of connections from a portion of the application-layer connections connected to the high-load service instance and terminating the subset of connections.
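
    A brief sketch of the redistribution step, assuming a hypothetical terminate helper; the load criterion (connection count above the fleet average) and the number of connections to drop are illustrative choices, not the patented criteria.

        connections = {  # service instance -> application-layer connections to peers
            "instance-1": ["peer-a", "peer-b", "peer-c", "peer-d", "peer-e", "peer-f"],
            "instance-2": ["peer-g", "peer-h"],
            "instance-3": ["peer-i", "peer-j", "peer-k"],
        }

        def find_high_load_instance(conns):
            # Illustrative load criterion: connection count above the fleet average.
            average = sum(len(c) for c in conns.values()) / len(conns)
            return max((name for name, c in conns.items() if len(c) > average),
                       key=lambda name: len(conns[name]), default=None)

        def terminate(instance, subset):
            # Stand-in for closing the chosen connections; the affected peers are
            # expected to reconnect and land on less loaded instances.
            for peer in subset:
                connections[instance].remove(peer)

        high = find_high_load_instance(connections)
        if high is not None:
            average = sum(len(c) for c in connections.values()) / len(connections)
            excess = int(len(connections[high]) - average)
            terminate(high, connections[high][:excess])
        print(connections)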

    Machine Learning Based Spend Classification Using Hallucinations

    Publication (Announcement) Number: US20250117838A1

    Publication (Announcement) Date: 2025-04-10

    Application Number: US18422321

    Filing Date: 2024-01-25

    Abstract: Embodiments classify a product to one of a plurality of product classifications. Embodiments receive a description of the product and create a first prompt for a trained large language model ("LLM"), the first prompt including the description of the product and contextual information of the product. In response to the first prompt, embodiments use the trained LLM to generate a hallucinated product classification for the product. Embodiments generate word embeddings for the hallucinated product classification and for the plurality of product classifications, and similarity match the embedded hallucinated classification with one of the embedded product classifications. The matched classification is determined to be the predicted classification of the product.
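
    An illustrative sketch of the prompt-then-match flow; hallucinate_classification stands in for the LLM call, and the bag-of-words cosine similarity is a stand-in for real word embeddings and similarity matching.

        import math
        from collections import Counter

        PRODUCT_CLASSIFICATIONS = ["office supplies", "laptop computers", "cleaning services"]

        def hallucinate_classification(description, context):
            # Stand-in for prompting a trained LLM with the product description and
            # contextual information; the model may return a label outside the taxonomy.
            return "laptop computer hardware"

        def embed(text):
            # Stand-in for word embedding: a simple bag-of-words vector.
            return Counter(text.lower().split())

        def cosine(a, b):
            dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        hallucinated = hallucinate_classification("14-inch ultrabook, 16 GB RAM", context="IT purchase")
        predicted = max(PRODUCT_CLASSIFICATIONS, key=lambda c: cosine(embed(hallucinated), embed(c)))
        print(predicted)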
