Abstract:
A category selection portion selects a face orientation based on the error between the positions of feature points (the eyes and the mouth) on a reference face of each face orientation and the positions of the corresponding feature points on the face of a collation face image. A collation portion collates the registered face images of the face orientation selected by the category selection portion with the collation face image. The face orientations are determined so that the face orientation ranges, in which the error with respect to each individual face orientation is within a predetermined value, are in contact with or overlap one another. As a result, the collation face image and the registered face images can be collated with each other more accurately.
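The orientation-selection step described above can be sketched as picking the category whose reference feature-point positions minimize the total position error, with an optional error ceiling. This is an illustrative sketch only: the category names, landmark coordinates, and the use of a summed Euclidean distance are assumptions, not details taken from the abstract.

```python
import math

# Hypothetical reference feature-point positions (eyes and mouth) for each
# face-orientation category; the names and coordinates are illustrative only.
CATEGORY_LANDMARKS = {
    "frontal": {"left_eye": (30, 40), "right_eye": (70, 40), "mouth": (50, 80)},
    "left":    {"left_eye": (20, 40), "right_eye": (55, 40), "mouth": (40, 80)},
    "right":   {"left_eye": (45, 40), "right_eye": (80, 40), "mouth": (60, 80)},
}

def landmark_error(landmarks, reference):
    """Sum of Euclidean distances between corresponding feature points."""
    return sum(math.dist(landmarks[name], reference[name]) for name in reference)

def select_category(landmarks, max_error=None):
    """Select the orientation whose reference landmarks best match the input.

    If max_error is given and even the best category exceeds it, no
    orientation range covers this face and None is returned.
    """
    best = min(
        CATEGORY_LANDMARKS,
        key=lambda c: landmark_error(landmarks, CATEGORY_LANDMARKS[c]),
    )
    if max_error is not None and landmark_error(landmarks, CATEGORY_LANDMARKS[best]) > max_error:
        return None
    return best
```

In a full pipeline, the returned category would then restrict collation to the registered face images of that orientation.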
Abstract:
A collator includes at least one processor and a storage unit storing a plurality of registered face images. The processor performs a partial collation that compares a feature quantity of a first target area, excluding partial areas, in each of the plurality of registered face images with a feature quantity of a second target area, excluding a partial area, in a search face image to be searched, and displays a result of the partial collation.
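One way to realize such a partial collation is to mask out the excluded partial areas on both sides and compare feature quantities only over the cells that remain in both images. The feature layout (one value per spatial cell), the use of cosine similarity, and the function names below are assumptions for illustration, not the collator's actual method.

```python
import numpy as np

def masked_similarity(reg_feat, search_feat, reg_mask, search_mask):
    """Cosine similarity over areas present in both images.

    reg_mask / search_mask flag cells belonging to excluded partial areas
    (e.g. occluded regions); True means "excluded". Cells excluded on
    either side are dropped from both feature vectors.
    """
    keep = ~(reg_mask | search_mask)          # the first/second target areas
    a, b = reg_feat[keep], search_feat[keep]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def partial_collation(registered, search_feat, search_mask):
    """Rank registered face images by partial-area similarity."""
    scores = {
        name: masked_similarity(feat, search_feat, mask, search_mask)
        for name, (feat, mask) in registered.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The ranked list returned here corresponds to the partial-collation result that the collator would display.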
Abstract:
A visitor management system, which can accurately determine whether a visitor presents a ticket proper for the visitor's personal attribute, includes: a reader configured to read ticket information from a ticket medium; a camera configured to capture an image of the visitor; a control device configured to acquire a personal attribute of the visitor from the captured image, determine whether consistency is achieved between the ticket information from the reader and the acquired personal attribute, and generate a management screen including determination result information; and a display configured to display the management screen. The determination result information generated by the control device may include the number of persons for each ticket type, the number of persons for each personal attribute, and inconsistency information indicating that consistency is not achieved.
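The consistency check and the per-ticket-type and per-attribute counts could be sketched as follows. The ticket-type/attribute mapping and the summary structure are hypothetical; a real system would derive the valid combinations from its own configuration.

```python
from collections import Counter

# Illustrative mapping from ticket type to the personal attribute it is
# valid for; real systems would configure these combinations.
VALID_ATTRIBUTE = {"child": "child", "adult": "adult", "senior": "senior"}

def check_visitor(ticket_type, attribute):
    """True when the ticket type is consistent with the visitor's attribute."""
    return VALID_ATTRIBUTE.get(ticket_type) == attribute

def build_management_summary(visitors):
    """Aggregate the determination results for the management screen.

    visitors is a list of (ticket_type, personal_attribute) pairs, one per
    visitor. Returns counts per ticket type and per attribute, plus the
    inconsistent pairs (the "inconsistency information").
    """
    by_ticket = Counter(t for t, _ in visitors)
    by_attribute = Counter(a for _, a in visitors)
    inconsistent = [(t, a) for t, a in visitors if not check_visitor(t, a)]
    return {
        "by_ticket": by_ticket,
        "by_attribute": by_attribute,
        "inconsistent": inconsistent,
    }
```

The returned summary contains exactly the three pieces of determination result information the abstract mentions: counts per ticket type, counts per personal attribute, and the inconsistency entries.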
Abstract:
Provided is a cosmetic assist device that can extract a cosmetic technique for making a user's face resemble a target face image. The device includes an image capturing unit for capturing a face image of the user, an input unit for inputting a target face image, a synthesized face image generating unit for generating a plurality of synthesized face images by applying mutually different cosmetic techniques to the face image of the user, a similarity determination unit for determining a degree of similarity between each synthesized face image and the target face image, and a cosmetic technique extraction unit for extracting the cosmetic technique used to obtain the synthesized face image determined by the similarity determination unit to have the highest degree of similarity.
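The extraction step amounts to an argmax over candidate techniques. In this sketch the synthesis and similarity computations are abstracted behind callables, since the abstract does not specify them; the function and parameter names are placeholders.

```python
def extract_best_technique(user_face, target_face, techniques, apply_fn, similarity_fn):
    """Return the technique whose synthesized face best matches the target.

    apply_fn(user_face, technique) -> synthesized face image
    similarity_fn(synthesized, target_face) -> score (higher = more similar)
    Both callables stand in for the device's actual image-processing units.
    """
    best_technique, best_score = None, float("-inf")
    for technique in techniques:
        synthesized = apply_fn(user_face, technique)   # synthesized face image
        score = similarity_fn(synthesized, target_face)
        if score > best_score:
            best_technique, best_score = technique, score
    return best_technique, best_score
```

With toy numeric "faces", `extract_best_technique(0, 5, [1, 2, 3], lambda f, t: f + t, lambda a, b: -abs(a - b))` selects technique `3`, the one whose synthesized result lies closest to the target.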