-
Publication No.: US20180276120A1
Publication Date: 2018-09-27
Application No.: US15624687
Filing Date: 2017-06-15
Applicant: Microsoft Technology Licensing, LLC
Inventor: Dimitrios VYTINIOTIS , Manuel Silverio da Silva COSTA , Kapil VASWANI , Matthew John PARKINSON , Piyus Kumar KEDIA
IPC: G06F12/02
CPC classification number: G06F12/0261 , G06F12/023 , G06F12/0253 , G06F12/109 , G06F12/145 , G06F2212/1016 , G06F2212/1044 , G06F2212/1052 , G06F2212/657
Abstract: A method of manual memory management is described. In response to detecting an access violation triggered by the use of an invalid reference to an object in a manual heap, a source of the access in a register or stack is identified. An updated reference for the object using stored mapping data is determined and used to replace the invalid reference in the source.
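The abstract describes recovering from a stale reference after an object has moved in a manual heap: the access violation is caught, the source of the bad reference is found, and stored mapping data supplies the updated reference. A minimal Python sketch of that idea (the class and method names are hypothetical illustrations, not taken from the patent):

```python
class RemappingHeap:
    """Toy model: when an object in a manual heap moves, a stale
    reference triggers a fault path that patches the reference
    in place using stored old-address -> new-address mapping data."""

    def __init__(self):
        self.heap = {}    # address -> object payload
        self.remap = {}   # old address -> new address (the mapping data)

    def allocate(self, addr, payload):
        self.heap[addr] = payload

    def move(self, old_addr, new_addr):
        # Relocate the object and record the mapping for later fix-ups.
        self.heap[new_addr] = self.heap.pop(old_addr)
        self.remap[old_addr] = new_addr

    def access(self, ref):
        # A reference to a missing address plays the role of the
        # access violation: look up the updated reference and return
        # it so the caller can replace the invalid one at its source.
        if ref not in self.heap:
            ref = self.remap[ref]
        return ref, self.heap[ref]
```

For example, after `move(0x10, 0x20)`, calling `access(0x10)` returns the patched reference `0x20` together with the object, so the caller can overwrite the stale reference in its register or stack slot.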
-
Publication No.: US20180253311A1
Publication Date: 2018-09-06
Application No.: US15615757
Filing Date: 2017-06-06
Applicant: Microsoft Technology Licensing, LLC
Inventor: Matthew John PARKINSON , Manuel Silverio da Silva COSTA , Dimitrios VYTINIOTIS , Kapil VASWANI
CPC classification number: G06F9/30123 , G06F9/30116
Abstract: A method of manual memory management is described which comprises enabling one or more threads to access an object created in a manual heap by storing a reference to the object in thread-local state and subsequently deleting the stored reference after accessing the object. In response to abandonment of the object, an identifier for the object and a current value of either a local counter of a thread or a global counter are stored in a delete queue and all threads are prevented from storing any further references to the object in thread-local state. Deallocation of the object only occurs when all references to the object stored in thread-local state for any threads have been deleted and a current value of the local counter for the thread or the global counter has incremented to a value that is at least a pre-defined amount more than the stored value, wherein the global counter is updated using one or more local counters.
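The scheme above defers deallocation until two conditions hold: no thread still holds a thread-local reference to the abandoned object, and a counter has advanced past the value recorded at abandonment by a pre-defined amount. A toy single-threaded Python sketch of that delete-queue logic (names and the margin value are illustrative assumptions, not from the patent):

```python
from collections import deque

class DeferredReclaimer:
    """Toy sketch: an abandoned object is freed only once (a) all
    thread-local references to it are deleted and (b) the counter has
    advanced by at least SAFETY_MARGIN past its value at abandonment."""

    SAFETY_MARGIN = 2  # the "pre-defined amount" from the abstract

    def __init__(self):
        self.counter = 0             # stands in for the global counter
        self.local_refs = {}         # object id -> set of thread ids
        self.delete_queue = deque()  # (object id, counter at abandonment)
        self.abandoned = set()
        self.freed = []

    def pin(self, obj_id, thread_id):
        # Store a thread-local reference; forbidden once abandoned.
        if obj_id in self.abandoned:
            raise RuntimeError("object abandoned; no new references allowed")
        self.local_refs.setdefault(obj_id, set()).add(thread_id)

    def unpin(self, obj_id, thread_id):
        # Delete the thread-local reference after the access.
        self.local_refs[obj_id].discard(thread_id)

    def abandon(self, obj_id):
        self.abandoned.add(obj_id)
        self.delete_queue.append((obj_id, self.counter))

    def advance(self):
        # Advance the counter, then deallocate queue entries that are
        # both old enough and no longer referenced by any thread.
        self.counter += 1
        while self.delete_queue:
            obj_id, at = self.delete_queue[0]
            if (self.counter - at >= self.SAFETY_MARGIN
                    and not self.local_refs.get(obj_id)):
                self.delete_queue.popleft()
                self.freed.append(obj_id)
            else:
                break
```

Note that an object stays queued while either condition fails: a live thread-local reference blocks it even after the counter margin is reached, matching the two-part condition in the abstract.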
-
Publication No.: US20240419967A1
Publication Date: 2024-12-19
Application No.: US18814438
Filing Date: 2024-08-23
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ryota TOMIOKA , Matthew Alastair JOHNSON , Daniel Stefan TARLOW , Samuel Alexander WEBSTER , Dimitrios VYTINIOTIS , Alexander Lloyd GAUNT , Maik RIECHERT
Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
-
Publication No.: US20220222531A1
Publication Date: 2022-07-14
Application No.: US17706586
Filing Date: 2022-03-28
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ryota TOMIOKA , Matthew Alastair JOHNSON , Daniel Stefan TARLOW , Samuel Alexander WEBSTER , Dimitrios VYTINIOTIS , Alexander Lloyd GAUNT , Maik RIECHERT
Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
-
Publication No.: US20180336458A1
Publication Date: 2018-11-22
Application No.: US15599058
Filing Date: 2017-05-18
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ryota TOMIOKA , Matthew Alastair JOHNSON , Daniel Stefan TARLOW , Samuel Alexander WEBSTER , Dimitrios VYTINIOTIS , Alexander Lloyd GAUNT , Maik RIECHERT
CPC classification number: G06N3/063
Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
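This abstract (shared by the three related applications above) describes a control node fanning training instances out to worker nodes, each holding one subgraph of the network, with message handlers updating subgraph parameters asynchronously. A minimal Python sketch of that structure, using threads to stand in for parallelized message passing (all class and parameter names are hypothetical, and the update rule is a plain SGD step chosen for illustration):

```python
import threading

class WorkerNode:
    """Toy worker holding one subgraph's parameters; each incoming
    message applies a gradient update to just this subgraph, with no
    coordination with the other workers."""

    def __init__(self, name, params):
        self.name = name
        self.params = dict(params)

    def handle_message(self, grads, lr=0.1):
        # Asynchronous per-subgraph parameter update.
        for key, g in grads.items():
            self.params[key] -= lr * g

class ControlNode:
    """Toy control node: sends one training instance's gradients into
    the worker network, one thread per worker, mimicking parallelized
    message passing across subgraphs."""

    def __init__(self, workers):
        self.workers = workers

    def send(self, grads_by_worker):
        threads = [
            threading.Thread(target=w.handle_message,
                             args=(grads_by_worker[w.name],))
            for w in self.workers if w.name in grads_by_worker
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
```

Because each worker mutates only its own subgraph's parameters, the per-worker updates need no locking against one another, which is the property the abstract's "asynchronously update parameters of individual subgraphs" relies on.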