Arrow Research · Search

Author name cluster

Licheng Yu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers · 2 author rows

Possible papers (4)

ICLR 2023 · Conference Paper

RoPAWS: Robust Semi-supervised Representation Learning from Uncurated Data

  • Sangwoo Mo
  • Jong-Chyi Su
  • Chih-Yao Ma
  • Mahmoud Assran
  • Ishan Misra
  • Licheng Yu
  • Sean Bell

Semi-supervised learning aims to train a model using limited labels. State-of-the-art semi-supervised methods for image classification, such as PAWS, rely on self-supervised representations learned with large-scale unlabeled but curated data. However, PAWS is often less effective on real-world unlabeled data that is uncurated, e.g., that contains out-of-class samples. We propose RoPAWS, a robust extension of PAWS that can work with real-world unlabeled data. We first reinterpret PAWS as a generative classifier that models densities using kernel density estimation. From this probabilistic perspective, we calibrate its prediction based on the densities of labeled and unlabeled data, which leads to a simple closed-form solution via Bayes' rule. We demonstrate that RoPAWS significantly improves over PAWS on uncurated Semi-iNat by +5.3% and on curated ImageNet by +0.4%.
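A minimal sketch of the calibration idea this abstract describes, assuming L2-normalized embeddings, a Gaussian-like kernel exp(sim / tau) on cosine similarity, and a single unlabeled bank standing in for the marginal density; the function name, the `prior` mixing weight, and the uniform spread of the marginal over classes are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of RoPAWS-style density calibration (not the paper's code).
import torch
import torch.nn.functional as F

def calibrated_probs(z_query, z_labeled, labels, z_unlabeled, tau=0.1, prior=0.5):
    """z_query: (B, D), z_labeled: (M, D), labels: (M,) long, z_unlabeled: (N, D)."""
    num_classes = int(labels.max()) + 1

    # Kernel density of each query under the labeled data, split by class.
    k_lab = torch.exp(z_query @ z_labeled.T / tau)              # (B, M)
    per_class = k_lab @ F.one_hot(labels, num_classes).float()  # (B, C)

    # Kernel density under the unlabeled bank approximates the marginal p(x).
    k_unlab = torch.exp(z_query @ z_unlabeled.T / tau).sum(1, keepdim=True)

    # Bayes' rule with the marginal spread uniformly over classes: queries far
    # from all labeled data (likely out-of-class) are pulled toward uniform.
    joint = prior * per_class + (1.0 - prior) * k_unlab / num_classes
    return joint / joint.sum(dim=1, keepdim=True)
```

The design choice worth noting is the mixture in the last step: a query that sits in a dense region of labeled data keeps a sharp class posterior, while one that is only near unlabeled data is smoothed toward uniform rather than forced into a class.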

ICRA 2021 · Conference Paper

Assistive supernumerary grasping with the back of the hand

  • Jungpyo Lee
  • Licheng Yu
  • Lucie Derbier
  • Hannah S. Stuart

The Dorsal Grasper, an assistive wearable grasping device, incorporates supernumerary fingers and an artificial palm on the forearm and the back of the hand, respectively. It enables power wrap grasping and adduction pinching with its V-shaped soft fingers. Designed with C6/C7 spinal cord injury in mind, it takes advantage of the active wrist extension that remains in this population after injury. We propose that allowing the operator to actively participate in applying grasp forces to the object, using the back of the hand, enables intuitive, fast, and reliable grasping relevant to activities of daily living. Functional grasping is tested with three normative subjects and a person with C6 SCI using the Grasp and Release Test. Results indicate that the device provides promising performance on a subset of objects, complementing the existing compensatory strategies used by people with C6/C7 SCI. We find that the addition of the artificial palm is important for increasing maximum grip strength, as it increases contact friction and protects the opisthenar.

NeurIPS 2021 · Conference Paper

VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation

  • Linjie Li
  • Jie Lei
  • Zhe Gan
  • Licheng Yu
  • Yen-Chun Chen
  • Rohit Pillai
  • Yu Cheng
  • Luowei Zhou

Most existing video-and-language (VidL) research focuses on a single dataset, or on multiple datasets for a single task. In reality, a truly useful VidL system should generalize easily to diverse tasks, domains, and datasets. To facilitate the evaluation of such systems, we introduce the Video-And-Language Understanding Evaluation (VALUE) benchmark, an assemblage of 11 VidL datasets over 3 popular tasks: (i) text-to-video retrieval; (ii) video question answering; and (iii) video captioning. The VALUE benchmark aims to cover a broad range of video genres, video lengths, data volumes, and task difficulty levels. Rather than focusing on single-channel videos with visual information only, VALUE promotes models that leverage information from both video frames and their associated subtitles, as well as models that share knowledge across multiple tasks. We evaluate various baseline methods with and without large-scale VidL pre-training, and systematically investigate the impact of video input channels, fusion methods, and different video representations. We also study the transferability between tasks, and conduct multi-task learning under different settings. The significant gap between our best model and human performance calls for future study of advanced VidL models. VALUE is available at https://value-benchmark.github.io/.
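A small sketch of how per-dataset results on an 11-dataset, 3-task benchmark of this kind can be reduced to one headline number; the dataset names and score values are placeholders, and the per-task macro-average is an illustrative choice, not VALUE's official scoring script.

```python
# Hedged sketch: aggregating per-dataset scores into a single benchmark number.
from statistics import mean

scores = {
    "retrieval":  {"dataset_a": 45.2, "dataset_b": 38.7},  # e.g., recall-based
    "qa":         {"dataset_c": 71.5, "dataset_d": 64.0},  # accuracy
    "captioning": {"dataset_e": 52.3},                     # e.g., CIDEr
}

# Average within each task first, then across tasks, so a task with many
# datasets does not dominate the headline number.
task_means = {task: mean(per_ds.values()) for task, per_ds in scores.items()}
overall = mean(task_means.values())
print(task_means, round(overall, 2))
```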

AAAI 2015 · Conference Paper

Dictionary Learning with Mutually Reinforcing Group-Graph Structures

  • Hongteng Xu
  • Licheng Yu
  • Dixin Luo
  • Hongyuan Zha
  • Yi Xu

In this paper, we propose a novel dictionary learning method for the semi-supervised setting that dynamically couples graph and group structures. To this end, samples are represented by sparse codes inheriting their graph structure, while labeled samples within the same class are represented with group sparsity, sharing the same atoms of the dictionary. Instead of statically combining the graph and group structures, we take advantage of them in a mutually reinforcing way: in the dictionary learning phase, we introduce unlabeled samples into groups by an entropy-based method and then update the corresponding local graph, resulting in a more structured and discriminative dictionary. We analyze the relationship between the two structures and prove the convergence of the proposed method. Focusing on the image classification task, we evaluate our approach on several datasets and obtain superior performance compared with state-of-the-art methods, especially in the case of only a few labeled samples and a limited dictionary size.
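A hedged sketch of the entropy-based grouping step the abstract mentions: an unlabeled sample joins a class group only when its sparse code concentrates on that class's dictionary atoms. The class-energy scoring, the entropy threshold, and the function name are illustrative assumptions, not the paper's exact criterion.

```python
# Hedged sketch of entropy-based group assignment for an unlabeled sample.
import numpy as np

def assign_group(code, atom_labels, num_classes, max_entropy=0.5):
    """code: (K,) sparse code; atom_labels: (K,) class index of each atom."""
    # Score each class by the sparse-code energy on that class's atoms.
    energy = np.array([np.abs(code[atom_labels == c]).sum()
                       for c in range(num_classes)])
    p = energy / max(energy.sum(), 1e-12)     # normalize to a distribution
    entropy = -np.sum(p * np.log(p + 1e-12))  # low entropy = confident code
    # Confident samples join a group now; the rest wait for a later iteration,
    # after the dictionary and local graph have been updated.
    return int(np.argmax(p)) if entropy <= max_entropy else None
```

Deferring uncertain samples is what makes the coupling "mutually reinforcing": each newly grouped sample sharpens the dictionary and graph, which in turn makes later assignments more confident.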