-
Publication No.: US20230282316A1
Publication Date: 2023-09-07
Application No.: US17807685
Filing Date: 2022-06-17
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sara MALVAR MAUA, Leonardo DE OLIVEIRA NUNES, Mirco MILLETARI', Neera Bansal TALBERT, Yazeed Khalid ALAUDAH, Jeremy Randall REYNOLDS, Yagna Deepika ORUGANTI, Ashish BHATIA, Anirudh BADAM
Abstract: A method for source attribution comprises receiving measurements of a chemical species at a spatially distributed sensor array for a given set of spatially positioned emission sources in a physical environment using a dispersion model. Based on the received measurements, a concentration field is mapped from the emission sources to the sensor array using a forward operator. For each emission source, a likelihood data set is evaluated at least by fitting an emission rate of the chemical species using a regression model based on the mapped concentration field and real-world, runtime measurements from the sensor array. A posterior data set is evaluated based at least on the evaluated likelihood data set and historical data for the physical environment. For each sensor of the sensor array, estimated emission rates and contribution rankings for emission sources are determined and output based on the evaluation of the posterior data set.
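The pipeline the abstract describes — a forward operator mapping source emission rates to sensor concentrations, a regression fit of emission rates against sensor measurements, and a posterior combining that fit with historical priors — can be sketched as a linear-Gaussian model. This is a minimal illustration, not the patented method: the forward operator `A`, the noise level, and the prior are placeholder assumptions, and a real dispersion model would supply `A`.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_sensors = 3, 8

# Forward operator A: maps per-source emission rates to sensor
# concentrations. In the described method this comes from a
# dispersion model; here it is random placeholder data.
A = rng.uniform(0.1, 1.0, size=(n_sensors, n_sources))
true_rates = np.array([2.0, 0.5, 1.0])
noise_sigma = 0.05
measured = A @ true_rates + rng.normal(0.0, noise_sigma, n_sensors)

# "Likelihood" step: fit emission rates to the measured
# concentrations with least-squares regression.
fitted_rates, *_ = np.linalg.lstsq(A, measured, rcond=None)

# "Posterior" step: combine the data fit with a Gaussian prior
# standing in for historical data; for a linear-Gaussian model
# the MAP estimate has a closed form.
prior_mean = np.ones(n_sources)
prior_sigma = 1.0
precision = A.T @ A / noise_sigma**2 + np.eye(n_sources) / prior_sigma**2
map_rates = np.linalg.solve(
    precision,
    A.T @ measured / noise_sigma**2 + prior_mean / prior_sigma**2,
)

# Per-sensor contribution of each source, and a contribution
# ranking (largest contributor first) for every sensor.
contributions = A * map_rates          # shape (n_sensors, n_sources)
rankings = np.argsort(-contributions, axis=1)
```

With low measurement noise the data term dominates the prior, so the MAP rates track the least-squares fit; as noise grows, the historical prior pulls the estimates back toward `prior_mean`.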
-
Publication No.: US20230129665A1
Publication Date: 2023-04-27
Application No.: US17457874
Filing Date: 2021-12-06
Applicant: Microsoft Technology Licensing, LLC
Inventors: Peeyush KUMAR, Hui Qing LI, Vaishnavi NATTAR RANGANATHAN, Lillian Jane RATLIFF, Ranveer CHANDRA, Vishal JAIN, Michael McNab BASSANI, Jeremy Randall REYNOLDS
Abstract: A computing system including a processor configured to receive training data including, for each of a plurality of training timesteps, training forecast states associated with respective training-phase agents included in a training supply chain graph. The processor may train a reinforcement learning simulation of the training supply chain graph using the training data via policy gradient reinforcement learning. At each training timestep, the training forecast states may be shared between simulations of the training-phase agents during training. The processor may receive runtime forecast states associated with respective runtime agents included in a runtime supply chain graph. For a runtime agent, at the trained reinforcement learning simulation, the processor may generate a respective runtime action output associated with a corresponding runtime forecast state of the runtime agent based at least in part on the runtime forecast states. The processor may output the runtime action output.
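The training loop the abstract outlines — per-agent policies trained with policy-gradient reinforcement learning, where every agent conditions on the shared forecast states of all agents in the supply chain graph — can be sketched with a toy REINFORCE update. Everything here is an illustrative assumption (linear-softmax policies, the deviation-from-demand reward, the dimensions), not the patented system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_actions, state_dim = 3, 4, 6  # state = concatenated agent forecasts

# One linear-softmax policy per agent over discrete order quantities;
# every agent conditions on the SHARED forecast state of all agents.
weights = [rng.normal(0, 0.1, (state_dim, n_actions)) for _ in range(n_agents)]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def step_episode(lr=0.05):
    # Shared forecast state visible to all agents this timestep.
    state = rng.normal(size=state_dim)
    for i in range(n_agents):
        probs = softmax(state @ weights[i])
        action = rng.choice(n_actions, p=probs)
        # Toy reward: penalize ordering away from a demand signal
        # derived from this agent's slice of the forecast.
        demand = int(abs(state[2 * i]) * 2) % n_actions
        reward = -abs(action - demand)
        # REINFORCE: grad of log pi(a|s) for a linear-softmax policy
        # is outer(s, one_hot(a) - probs).
        grad_log = -np.outer(state, probs)
        grad_log[:, action] += state
        weights[i] += lr * reward * grad_log

for _ in range(200):
    step_episode()
```

At runtime, the trained per-agent policies would map a runtime forecast state to an action output per agent, mirroring the abstract's train-then-deploy split.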
-