Summaries of Cinnamon’s Papers at ICPR and SIGGRAPH 2020

Cinnamon AI
Aug 7, 2020


So far in 2020, five of our papers have been accepted at SIGGRAPH and ICPR. This is a big step for Cinnamon AI’s research team in the areas of Flax (Cinnamon’s OCR product) and computer vision. Below are summaries of Cinnamon’s papers at ICPR and SIGGRAPH 2020.

1. Correspondence Neural Network for Line Art Colorization

Conference: SIGGRAPH 2020

Authors: Trung Dang (Tyler), Thien Do (Hades), Anh Nguyen (Cat), Van Pham (Kan), Quoc Nguyen (Akari), Bach Hoang (Gale), Giao Nguyen (Enzo)

We propose to colorize a sketch by matching its components with those of a colorized reference. Each component is transformed into a hidden representation by a neural network, which, in turn, is used to find its corresponding parts. We used a customized objective function so the network learns the mapping between them. This led to a major accuracy improvement compared with traditional computer vision methods.
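The matching step above can be sketched as nearest-neighbour assignment in the learned embedding space. This is a minimal toy illustration with hand-made feature vectors, not the paper's actual network or objective; all names here are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_components(sketch_feats, ref_feats):
    """Assign each sketch component to the reference component whose
    embedding is most similar (greedy nearest neighbour)."""
    matches = {}
    for i, sf in enumerate(sketch_feats):
        best = max(range(len(ref_feats)), key=lambda j: cosine(sf, ref_feats[j]))
        matches[i] = best
    return matches
```

In the real system, the embeddings come from the trained network, so components that should share a color end up close in this space.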

2. LODENet: Logographic Decomposition Network for offline Handwritten Text Line Recognition

Conference: ICPR 2020

Authors: Huu-Tin Hoang* (Tin), Chun-Jen Peng* (Larry), Hung Vinh Tran (Xing), Hung Le (Toni), Huy Hoang Nguyen (Robert)

Figure: The architecture of LODENet includes a CRNN network for predicting radicals and a conversion network for predicting logograms. The weighted sum of the two CTC losses allows for end-to-end training.

The huge character set, compared with alphabetical languages, creates challenges in text recognition for handwritten logogram-based languages such as Japanese or Chinese. To reduce the memory consumption of deep-learning-based recognition systems, we propose LOgographic DEComposition, a novel encoding that treats each character as a composition of radicals and basic components, together with a deep learning model that leverages this encoding. Experiments demonstrated state-of-the-art performance in both accuracy and training time efficiency (which leads to faster deployment and tuning).
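The two ideas above can be illustrated with a toy sketch: a small decomposition table that maps logograms to radical sequences (shrinking the output vocabulary at scale), and the weighted sum that combines the two CTC losses for end-to-end training. The table entries and function names are illustrative, not the paper's actual encoding.

```python
# Toy decomposition table: each logogram is encoded as a sequence of
# radicals, so the model's output vocabulary is the radical set rather
# than the full character set (much smaller at real-world scale).
DECOMPOSITION = {
    "好": ["女", "子"],
    "妈": ["女", "马"],
    "吗": ["口", "马"],
}

def radical_vocab(table):
    """Collect the distinct radicals used by the decomposition table."""
    return sorted({r for seq in table.values() for r in seq})

def combined_loss(radical_ctc_loss, logogram_ctc_loss, alpha=0.5):
    """Weighted sum of the two CTC losses, so gradients from both the
    radical branch and the logogram branch flow through one backward
    pass (end-to-end training)."""
    return alpha * radical_ctc_loss + (1.0 - alpha) * logogram_ctc_loss
```

With only radicals in the softmax layer, the final projection matrix is far smaller than one sized for tens of thousands of characters.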

3. End-to-End Hierarchical Relation Extraction for Generic Form Understanding

Conference: ICPR 2020

Authors: Tuan Anh Nguyen Dang (Tadashi), Duc-Thanh Hoang (Kris), Quang Bach Tran (neath), Chih-Wei Pan (Blake), Thanh-Dat Nguyen (Marc)

Figure: MSAU-PAF Architecture and Output

“End-to-End Hierarchical Relation Extraction” is a major improvement on our previous paper, Multi-stage Attentional U-Net, combining the robustness of our previous work with the flexibility of another approach. MSAU-PAF predicts which terms appearing in the document are keys or values and their corresponding connections at the same time, in a single deep-learning model.
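The joint prediction described above can be sketched as a toy decoding step: tag each token from per-token scores, then connect each key to its strongest-scoring value. The names, shapes, and threshold are hypothetical; this is not the actual MSAU-PAF decoding.

```python
def extract_relations(tag_scores, link_scores, threshold=0.5):
    """Toy joint decoding for form understanding.

    tag_scores:  one dict of {"key"/"value"/"other": score} per token.
    link_scores: link_scores[i][j] is the strength of a key->value
                 connection from token i to token j.
    Returns the per-token tags and a {key_index: value_index} map.
    """
    tags = [max(scores, key=scores.get) for scores in tag_scores]
    links = {}
    for i, tag in enumerate(tags):
        if tag != "key":
            continue
        candidates = [(link_scores[i][j], j)
                      for j, other in enumerate(tags) if other == "value"]
        if candidates:
            score, j = max(candidates)
            if score >= threshold:
                links[i] = j
    return tags, links
```

In the real model, both the tag scores and the link scores come out of one network, which is what makes the extraction end-to-end.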

4. Anime Sketch Colorization by Component-based Matching using Deep Appearance Features and Graph Representation

Conference: ICPR 2020 (Under major revision)

Authors: Thien Do (Hades), Van Pham (Kan), Anh Nguyen (Cat), Trung Dang (Tyler), Quoc Nguyen (Akari), Bach Hoang (Gale), Giao Nguyen (Enzo)

This paper presents a new framework for automatic image colorization. The framework first extracts components from a colorized reference image and matches these components to their corresponding locations in uncolorized frames. The novelty lies in matching colorized and uncolorized components by encoding each component with a deep learning architecture and then using a graph that represents the relations between components for matching.
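The graph representation mentioned above can be sketched minimally: treat each component as a node and connect components that are spatially close. The radius rule and names here are an assumption for illustration; the paper's relation graph is more elaborate.

```python
def build_component_graph(centroids, radius=50.0):
    """Toy graph construction: connect two components whose centroids
    lie within `radius` pixels of each other. Returns the edge set as
    pairs of component indices (i < j)."""
    edges = set()
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            (xi, yi), (xj, yj) = centroids[i], centroids[j]
            if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= radius:
                edges.add((i, j))
    return edges
```

Matching can then compare not only each component's appearance features but also the structure of its neighbourhood in this graph.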

5. Facial Expression Recognition Using Residual Masking Network

Conference: ICPR 2020

Authors: Luan Pham (luan), The Huynh Vu (Ben), Tuan Anh Tran (Tommy)

Figure: The Residual Masking Network Architecture

In this paper, we propose a new tuning technique (using a masking block) to boost the performance of state-of-the-art deep-learning-based approaches to facial expression recognition.

Cinnamon AI is a pioneer in consulting on and designing innovative AI-powered solutions for business. Follow our official Medium for more :)
