1.
Publication No.: US20160307351A1
Publication Date: 2016-10-20
Application No.: US15189508
Filing Date: 2016-06-22
Inventors: Zhenwei ZHANG, Fen XIAO, Jingjing LIU, Wenpei HOU, Ling WANG, Zhehui WU, Xinlei ZHANG
IPC Classification: G06T11/60, H04L12/58, G06F3/0484, G06T7/00, G06K9/00
CPC Classification: G06T11/60, G06F3/04842, G06K9/00362, G06T2207/20024, G06T2207/30196, H04L51/10, H04L51/24, H04L51/32, H04W84/12
Abstract: The present invention is applicable to the field of the Internet and provides an interactive method and apparatus based on a web picture. The method includes: obtaining a web picture that includes a human image; determining a region where a specific part of the human image in the web picture is located; and, upon receiving an interactive instruction in the region where the specific part is located, generating interactive information corresponding to that specific part. The interactive method provided by the embodiments of the present invention is simple to operate and supports a variety of interactive manners.
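A minimal illustrative sketch of the described flow in Python, assuming a hypothetical Region bounding box, a stubbed locate_specific_part detector, and a handle_interactive_instruction hit test; none of these names come from the patent itself:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Region:
    """Axis-aligned bounding box of a specific part (e.g., the head) of the human image."""
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.width and self.y <= py < self.y + self.height


def locate_specific_part(picture_path: str) -> Region:
    """Stand-in for the detection step; a real system would run a person/part
    detector on the web picture and return the part's bounding box."""
    return Region(x=120, y=40, width=80, height=80)  # fixed box, for illustration only


def handle_interactive_instruction(region: Region, click_x: int, click_y: int,
                                   action: str) -> Optional[dict]:
    """If the interactive instruction (here, a click) lands inside the region of the
    specific part, generate interactive information corresponding to that part."""
    if not region.contains(click_x, click_y):
        return None
    return {"part": "head", "action": action,
            "message": f"Performed '{action}' on the head region"}


if __name__ == "__main__":
    region = locate_specific_part("group_photo.jpg")
    print(handle_interactive_instruction(region, click_x=150, click_y=70, action="pat"))
```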
2.
Publication No.: US20160110070A1
Publication Date: 2016-04-21
Application No.: US14978373
Filing Date: 2015-12-22
Inventors: Qiang FU, Hao WU, Xiaochuan XU, Aidun ZHANG, Youkun HUANG, Xinxing LI, Yutao LI, Xin LIU, Jin CUI, Linwei CHEN, Jingjing LIU
IPC Classification: G06F3/0484, G06T1/00, G06F3/0482
CPC Classification: G06F3/04842, G06F3/0482, G06F16/58, G06T1/0007
Abstract: A photo collection display method and apparatus are disclosed. The method includes: acquiring photo selection information, the photo selection information including a birth date of a figure in a photo; reading photo information of each photo in a photo collection, the photo information including a photo shooting time, the photo collection containing more than one photo; calculating a difference between the photo shooting time and the birth date of the figure for each photo to obtain a sorting date of each photo; creating a timeline comprising time points; matching each sorting date with a time point in the timeline to obtain a link relationship between the sorting date and the corresponding time point in the timeline; and displaying, after a time point in the timeline is triggered, the corresponding photo of the photo collection in a predetermined position of the timeline according to the link relationship.
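The sorting and linking steps can be sketched in Python as follows; the sample photos, the sorting_date helper, and the choice of whole-year age as the sorting date are illustrative assumptions, not details taken from the filing:

```python
from collections import defaultdict
from datetime import date

# Hypothetical sample data: the photo selection information carries the birth date
# of the figure, and each photo's information carries its shooting time.
birth_date = date(2015, 6, 1)
photos = [
    {"file": "IMG_001.jpg", "shooting_time": date(2016, 6, 15)},
    {"file": "IMG_002.jpg", "shooting_time": date(2018, 1, 3)},
    {"file": "IMG_003.jpg", "shooting_time": date(2018, 9, 20)},
]


def sorting_date(shooting_time: date, born: date) -> int:
    """Difference between shooting time and birth date, expressed here as the
    figure's age in whole years at shooting time (one possible interpretation)."""
    years = shooting_time.year - born.year
    if (shooting_time.month, shooting_time.day) < (born.month, born.day):
        years -= 1
    return years


# Timeline whose time points (ages 0, 1, 2, ...) are linked to the matching photos.
timeline = defaultdict(list)
for photo in photos:
    timeline[sorting_date(photo["shooting_time"], birth_date)].append(photo["file"])


def on_time_point_triggered(age: int) -> list:
    """Triggering a time point returns the photos to display at that point."""
    return timeline.get(age, [])


print(dict(timeline))              # {1: ['IMG_001.jpg'], 2: ['IMG_002.jpg'], 3: ['IMG_003.jpg']}
print(on_time_point_triggered(3))  # ['IMG_003.jpg']
```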
3.
Publication No.: US20240248588A1
Publication Date: 2024-07-25
Application No.: US18624698
Filing Date: 2024-04-02
Inventors: Ying YE, Yang LI, Lijing YUAN, Licheng ZHENG, Rui WANG, Jingyi ZHOU, Jingjing LIU, Ni SU, Yuyang KUANG
IPC Classification: G06F3/0484, G06F3/04817, G06F40/186
CPC Classification: G06F3/0484, G06F3/04817, G06F40/186
Abstract: A media content creation method includes: displaying, by a terminal device, a main modality editing interface; generating, by the terminal device, main modality media content in response to an editing operation performed on the main modality editing interface; converting, by the terminal device, the main modality media content to target sub-modality media content in response to a modality conversion operation; and displaying, by the terminal device, the generated main modality media content and the target sub-modality media content.
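A rough Python sketch of the main-modality/sub-modality flow; MediaContent, the CONVERTERS registry, and the placeholder converters are hypothetical names used only to illustrate the described steps, not the terminal device's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass
class MediaContent:
    modality: str   # e.g. "text", "audio", "video"
    payload: str    # simplified; real content would be structured or binary data


# Registry of illustrative modality converters; a real terminal device would invoke
# speech synthesis, video generation, etc. for the modality conversion operation.
CONVERTERS: Dict[Tuple[str, str], Callable[[MediaContent], MediaContent]] = {
    ("text", "audio"): lambda c: MediaContent("audio", f"<speech rendering of: {c.payload}>"),
    ("text", "video"): lambda c: MediaContent("video", f"<slideshow rendering of: {c.payload}>"),
}


def create_main_modality_content(edited_text: str) -> MediaContent:
    """Main-modality media content generated from an editing operation on the
    main modality editing interface."""
    return MediaContent("text", edited_text)


def convert_to_sub_modality(main: MediaContent, target: str) -> MediaContent:
    """Convert main-modality content into the target sub-modality content."""
    return CONVERTERS[(main.modality, target)](main)


if __name__ == "__main__":
    main_content = create_main_modality_content("Today's product launch recap")
    sub_content = convert_to_sub_modality(main_content, "audio")
    print(main_content)   # "display" the main-modality content
    print(sub_content)    # "display" the target sub-modality content
```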
4.
Publication No.: US20240062744A1
Publication Date: 2024-02-22
Application No.: US18384009
Filing Date: 2023-10-26
Inventors: Jingjing LIU, Bihong ZHANG
CPC Classification: G10L15/063, G10L15/02, G10L15/04, G10L19/16
Abstract: A real-time voice recognition method and a real-time voice recognition model training method are provided. The model training method includes: obtaining an audio feature sequence of sample voice data, the audio feature sequence comprising audio features of a plurality of audio frames of the sample voice data; inputting the audio feature sequence to an encoder of the real-time voice recognition model; chunking the audio feature sequence into a plurality of chunks by the encoder according to a mask matrix; encoding each of the chunks to obtain a hidden layer feature sequence of the sample voice data; decoding the hidden layer feature sequence by a decoder of the real-time voice recognition model to obtain a predicted recognition result for the sample voice data; and training the real-time voice recognition model based on the predicted recognition result and a real recognition result of the sample voice data.
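The chunking-by-mask-matrix step can be illustrated with a small NumPy sketch; the chunk_mask function, the chunk size, and the limited-left-context rule are assumptions about one common way such masks are built for streaming encoders, not the patented method itself:

```python
import numpy as np


def chunk_mask(num_frames: int, chunk_size: int, left_chunks: int = 1) -> np.ndarray:
    """Build a boolean mask matrix for chunk-based encoding.

    mask[i, j] is True when frame i may attend to frame j, i.e. when frame j lies
    in frame i's own chunk or in up to `left_chunks` preceding chunks, so the
    encoder sees only limited left context while processing the audio feature
    sequence chunk by chunk.
    """
    chunk_ids = np.arange(num_frames) // chunk_size   # chunk index of each frame
    diff = chunk_ids[:, None] - chunk_ids[None, :]    # how many chunks frame j is behind frame i
    return (diff >= 0) & (diff <= left_chunks)


if __name__ == "__main__":
    mask = chunk_mask(num_frames=8, chunk_size=2, left_chunks=1)
    print(mask.astype(int))
    # Frames 0-1 form chunk 0, frames 2-3 chunk 1, and so on; each frame attends
    # to its own chunk and the immediately preceding chunk.
```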