Abstract:
A multi-layer composite precursor is provided comprising a substrate, wherein the substrate comprises a light-emitting organic compound, a first surface, and a second surface, wherein the second surface is superimposed by a transparent electrically conducting layer; and a liquid phase superimposing at least a part of the first surface, wherein the liquid phase comprises a metal-organic compound, wherein the metal-organic compound comprises an organic moiety, wherein the organic moiety comprises a C═O group; and wherein the liquid phase further comprises a first silicon compound, wherein the first silicon compound comprises at least one carbon atom and at least one nitrogen atom.
Abstract:
A method for eliminating hologram DC noise and a hologram device using the same are provided. The method for processing the hologram includes: receiving input of hologram data; and performing a differential operation with respect to the hologram data. Accordingly, because the hologram data is processed by the differential operation, DC noise that would otherwise occur when the hologram is reconstructed can be effectively eliminated.
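For illustration only, the following is a minimal sketch of the differential-operation idea, assuming the hologram data is a two-dimensional intensity array and using a first-order finite difference along one axis as the differential operation; the operator, axis, and padding are assumptions, not details taken from the abstract.

```python
import numpy as np

def remove_dc_by_differentiation(hologram: np.ndarray, axis: int = 1) -> np.ndarray:
    """Suppress the DC term of a hologram by differentiating it.

    A constant background contributes only to the zero-frequency component,
    so a finite-difference derivative along one axis removes it before
    reconstruction. (Illustrative stand-in for the claimed differential operation.)
    """
    diff = np.diff(hologram.astype(np.float64), axis=axis)
    pad = [(0, 0)] * hologram.ndim
    pad[axis] = (0, 1)                      # pad to keep the original shape
    return np.pad(diff, pad, mode="edge")

# Usage: a synthetic hologram with a strong DC offset and a weak fringe pattern.
y, x = np.mgrid[0:256, 0:256]
hologram = 100.0 + np.cos(0.2 * x)
processed = remove_dc_by_differentiation(hologram)
print(abs(np.fft.fft2(hologram))[0, 0], abs(np.fft.fft2(processed))[0, 0])
```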
Abstract:
The present invention is a wrapped core linking module architecture for system-on-chip test access that maintains compatibility with the IEEE 1149.1 standard for not only an IEEE 1149.1 boundary scan but also cores wrapped with an IEEE P1500 wrapper, and that provides systematic, expandable access to the system-on-chip test. Thus, the wrapped core linking module in accordance with the present invention includes: a link control register for storing the link control configuration between cores in the scan path of a system on chip according to control signals applied from outside the boundary; a link control register controller for activating said link control register and controlling the shift and update of the link configuration; a switch for setting the scan path between wrapped cores based on the link control configuration of said link control register; and an output logic for connecting said link control register to the test data out (TDO) of the chip when testing the system on chip or cores of the system on chip.
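A behavioral sketch of the shift/update and path-switching behavior is given below in Python (not RTL); the register width, bit ordering, and core names are illustrative assumptions only.

```python
class LinkControlRegister:
    """Behavioral model of the link control register: a shift stage that
    captures serial configuration bits and an update stage that holds the
    active link configuration, in the style of IEEE 1149.1 data registers."""
    def __init__(self, width: int):
        self.shift_stage = [0] * width
        self.update_stage = [0] * width

    def shift(self, tdi_bit: int) -> int:
        tdo_bit = self.shift_stage[-1]                  # bit shifted toward TDO
        self.shift_stage = [tdi_bit] + self.shift_stage[:-1]
        return tdo_bit

    def update(self) -> None:
        self.update_stage = list(self.shift_stage)      # apply the link configuration

def build_scan_path(link_config, wrapped_cores):
    """Switch behavior: include a wrapped core in the scan path only when its
    bit in the updated link configuration is set; otherwise bypass it."""
    return [core for bit, core in zip(link_config, wrapped_cores) if bit]

# Usage: shift in a configuration linking core0 and core2, then update.
lcr = LinkControlRegister(width=3)
for bit in [1, 0, 1]:                                   # serial TDI stream (illustrative)
    lcr.shift(bit)
lcr.update()
print(build_scan_path(lcr.update_stage, ["core0", "core1", "core2"]))
```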
Abstract:
There is provided a dynamic data block caching automation application method for high-speed data access based on computational storage. A query execution method according to an embodiment includes the steps of: synchronizing, by a DBMS, an ECC, which is a cache of the DBMS, and an ICC, which is a cache of a computational storage in which a DB is established; generating an offloading execution code that defines the operation information necessary for query computation offloading based on a query requested by a client; and processing the offloading execution code by using the synchronized ECC and ICC. Accordingly, the load on the CSD, which itself serves to reduce the load on the DBMS, is reduced through reduced snippet offloading and reduced snippet processing, and high-speed query processing is enabled by disk-I/O-optimized data access.
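Purely as a schematic illustration of the three steps, a Python sketch follows; the dict-based caches, function names, and the content of the offloading execution code are assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Cache:
    blocks: dict = field(default_factory=dict)   # block_id -> cached data block

def synchronize(ecc: Cache, icc: Cache) -> None:
    """Step 1: make the DBMS-side cache (ECC) and the CSD-side cache (ICC)
    hold the same set of data blocks."""
    merged = {**icc.blocks, **ecc.blocks}
    ecc.blocks, icc.blocks = dict(merged), dict(merged)

def generate_offloading_code(query: str) -> dict:
    """Step 2: derive the operation information needed to offload part of the
    query computation to the computational storage (illustrative content)."""
    return {"query": query, "operations": ["filter", "project"], "target": "CSD"}

def process(code: dict, ecc: Cache, icc: Cache) -> list:
    """Step 3: execute the offloading code, serving blocks from the synchronized
    ECC/ICC instead of re-reading them from disk."""
    needed = code.get("blocks", list(ecc.blocks))
    return [ecc.blocks.get(b) or icc.blocks.get(b) for b in needed]

# Usage with two toy data blocks.
ecc, icc = Cache({"b1": "rows-1"}), Cache({"b2": "rows-2"})
synchronize(ecc, icc)
print(process(generate_offloading_code("SELECT * FROM t"), ecc, icc))
```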
Abstract:
Proposed is a computing device for operating an artificial intelligence (AI) service. The computing device may include a local intelligence storage in which a deep learning model for container-based AI applications is stored. The computing device may also include a local intelligence/model management unit configured to scan the local intelligence storage to identify information on the deep learning model installed in the computing device and to provide the identified information on the deep learning model through a network interface. The local intelligence/model management unit may manage the deep learning model so that it is stored on the local intelligence storage, which is independent of the container.
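As one possible reading of the scan-and-report behavior, a minimal Python sketch is given below; the storage directory, file layout, and metadata fields are assumptions introduced only for illustration.

```python
import json
import os

LOCAL_INTELLIGENCE_DIR = "/var/local-intelligence"   # assumed path, kept outside the container

def scan_local_models(storage_dir: str = LOCAL_INTELLIGENCE_DIR) -> list:
    """Scan the local intelligence storage and collect information on the
    deep learning models installed in the computing device."""
    models = []
    for name in sorted(os.listdir(storage_dir)):
        path = os.path.join(storage_dir, name)
        if os.path.isfile(path):
            models.append({"name": name, "size_bytes": os.path.getsize(path)})
    return models

def report_over_network(models: list) -> str:
    """Serialize the identified model information so it can be provided
    through the network interface."""
    return json.dumps(models)

# Usage, guarded so it only runs if the assumed directory exists.
if os.path.isdir(LOCAL_INTELLIGENCE_DIR):
    print(report_over_network(scan_local_models()))
```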
Abstract:
The present invention relates to an image inpainting apparatus and an image inpainting method, the image inpainting apparatus including: a background inpainting part configured to generate a background-inpainted image by carrying out inpainting on a background with respect to an input image in which a region to be inpainted is set; an object inpainting part configured to generate an object image by carrying out inpainting on an object; and an image overlapping part configured to generate an output image by causing the generated background-inpainted image and object image to overlap each other.
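For the final overlapping step, a minimal sketch follows, assuming the background-inpainted image, the object image, and a binary object mask are arrays of matching size; the mask input is an assumption the abstract does not spell out.

```python
import numpy as np

def overlap_images(background_inpainted: np.ndarray,
                   object_image: np.ndarray,
                   object_mask: np.ndarray) -> np.ndarray:
    """Compose the output image: object pixels where the mask is set,
    background-inpainted pixels elsewhere. Works for H x W or H x W x C
    images with an H x W mask."""
    mask = object_mask.astype(bool)
    output = background_inpainted.copy()
    output[mask] = object_image[mask]
    return output

# Usage with small grayscale arrays.
bg = np.full((4, 4), 0.5)
obj = np.ones((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(overlap_images(bg, obj, mask))
```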
Abstract:
There is provided a query execution method in a DB system in which a plurality of CSDs are used as storage. According to an embodiment, a query execution method includes: generating snippets for offloading a part of the query computations for a query received from a client to the CSDs; scheduling the generated snippets onto the CSDs; collecting the results of offloading; and merging the collected results of offloading. Accordingly, by dividing query computations, offloading them, and processing them in parallel, while the DBMS processes query computations that are inappropriate for offloading, a query request from a client can be executed effectively and rapidly.
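A schematic sketch of the four steps is shown below; treating each snippet as a scan over one table partition, scheduling round-robin, and simulating offloading with a local call are all illustrative assumptions rather than the claimed method.

```python
from concurrent.futures import ThreadPoolExecutor

def generate_snippets(query: str, partitions: list) -> list:
    # One snippet per partition of the queried data (illustrative granularity).
    return [{"query": query, "partition": p} for p in partitions]

def schedule(snippets: list, csds: list) -> list:
    # Round-robin assignment of snippets to computational storage devices.
    return [(csds[i % len(csds)], s) for i, s in enumerate(snippets)]

def offload(csd: str, snippet: dict) -> list:
    # Placeholder for pushing the snippet to the CSD and reading back its result.
    return [f"{csd}:{snippet['partition']}"]

def execute(query: str, partitions: list, csds: list) -> list:
    plan = schedule(generate_snippets(query, partitions), csds)
    with ThreadPoolExecutor(max_workers=len(csds)) as pool:
        results = pool.map(lambda job: offload(*job), plan)   # collect in parallel
    return [row for part in results for row in part]          # merge collected results

print(execute("SELECT * FROM t", ["p0", "p1", "p2"], ["csd0", "csd1"]))
```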
Abstract:
This application relates to a pellicle for extreme ultraviolet lithography based on yttrium (Y), for use in a lithography process using extreme ultraviolet rays. In one aspect, the pellicle includes a pellicle layer including a core layer formed of an yttrium-based material expressed as Y-M, where M is one of B, Si, O, or F.
Abstract:
A conductive yarn pressure sensor is proposed. The pressure sensor may include a porous fiber layer having predetermined cavities formed therein. The pressure sensor may also include a first sensing electrode made of a first conductive yarn formed on one surface of the porous fiber layer, and a second sensing electrode made of a second conductive yarn formed on the other surface of the porous fiber layer. The first sensing electrode and the second sensing electrode may be provided so as to come into contact with each other in the cavities of the porous fiber layer under external pressure. According to an embodiment, by incorporating conductive yarn into flexible clothing or textile material, pressure can be sensed by responding effectively to deformation caused by external pressure.
Abstract:
The present invention relates to a pop-count-based deep learning neural network computation method, a multiply accumulator, and a device thereof. The computation method according to an exemplary embodiment of the present invention is a computation method for a deep learning neural network, including: a step of generating one-hot encoding codes according to the type of first-multiplication result value for a multiplication (first multiplication) of weights (W) and input values (A); a step of performing a pop-count for each generated code; and a step of accumulating the result values of a constant multiplication (second multiplication) between each type of first-multiplication result value, which are distinct constant values, and the corresponding pop-count value.
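A minimal sketch of the idea follows, assuming low-precision integer weights and input values so that the first-multiplication results take only a few distinct values; the Counter stands in for the hardware one-hot encoder and pop-counters.

```python
from collections import Counter

def popcount_mac(weights: list, activations: list) -> int:
    """Pop-count based multiply-accumulate.

    Instead of accumulating every product W*A individually, count how many
    times each distinct product value (the type of first-multiplication
    result) occurs, then accumulate value * count as the second, constant
    multiplication.
    """
    counts = Counter(w * a for w, a in zip(weights, activations))  # one-hot + pop-count stage
    return sum(value * count for value, count in counts.items())   # constant multiply + accumulate

# Usage: matches the naive dot product.
W = [1, -1, 1, 1, -1]
A = [3, 2, 3, 1, 2]
assert popcount_mac(W, A) == sum(w * a for w, a in zip(W, A))
print(popcount_mac(W, A))
```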