Arrow Research search

Author name cluster

Sumio Fujita

Papers that may be associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full author-disambiguation profile.

3 papers

Possible papers


AAAI Conference 2019 · Conference Paper

Stochastic Submodular Maximization with Performance-Dependent Item Costs

  • Takuro Fukunaga
  • Takuya Konishi
  • Sumio Fujita
  • Ken-ichi Kawarabayashi

We formulate a new stochastic submodular maximization problem by introducing performance-dependent costs of items. In this problem, we consider selecting items for the case where the performance of each item (i.e., how much an item contributes to the objective function) is decided randomly, and the cost of an item depends on its performance. The goal of the problem is to maximize the objective function subject to a budget constraint on the costs of the selected items. We present an adaptive algorithm for this problem with a theoretical guarantee that its expected objective value is at least (1 − 1/∜e)/2 times the maximum value attained by any adaptive algorithm. We verify the performance of the algorithm through numerical experiments.
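To illustrate the adaptive setting the abstract describes, here is a minimal cost-benefit greedy sketch in Python. It is not the paper's algorithm (which carries the (1 − 1/∜e)/2 guarantee); the `marginal_gain` and `realize` callbacks are hypothetical interfaces introduced only to show how an item's performance and cost are revealed adaptively after selection.

```python
def adaptive_greedy(items, marginal_gain, realize, budget):
    """Adaptive greedy sketch (illustration only, NOT the paper's algorithm).

    items: list of candidate item ids
    marginal_gain(item, selected): expected marginal gain of adding item
    realize(item): draws (performance, cost) for the item; the cost may
        depend on the realized performance, as in the paper's model
    budget: total cost budget
    """
    selected, spent = [], 0.0
    remaining = set(items)
    while remaining:
        # pick the item with the largest expected marginal gain
        # (a cost-aware rule would divide by the expected cost)
        best = max(remaining, key=lambda i: marginal_gain(i, selected))
        perf, cost = realize(best)      # performance revealed only now
        if spent + cost > budget:       # respect the budget constraint
            break
        selected.append((best, perf))
        spent += cost
        remaining.remove(best)
    return selected, spent
```

The key adaptive feature is that each realization is observed before the next choice is made, so later decisions can depend on earlier outcomes.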

AAAI Conference 2018 · Conference Paper

AdaFlock: Adaptive Feature Discovery for Human-in-the-loop Predictive Modeling

  • Ryusuke Takahama
  • Yukino Baba
  • Nobuyuki Shimizu
  • Sumio Fujita
  • Hisashi Kashima

Feature engineering is the key to successful application of machine learning algorithms to real-world data. The discovery of informative features often requires domain knowledge or human inspiration, and data scientists expend considerable effort exploring feature spaces. Crowdsourcing is considered a promising approach for involving many people in feature engineering; however, there is a demand for a sophisticated strategy that enables us to acquire good features at a reasonable crowdsourcing cost. In this paper, we present a novel algorithm called AdaFlock to efficiently obtain informative features through crowdsourcing. AdaFlock is inspired by AdaBoost, which iteratively trains classifiers by increasing the weights of samples misclassified by previous classifiers. AdaFlock iteratively generates informative features; at each iteration of AdaFlock, crowdsourcing workers are shown samples selected according to the classification errors of the current classifiers and are asked to generate new features that are helpful for correctly classifying the given examples. The results of our experiments conducted using real datasets indicate that AdaFlock successfully discovers informative features within fewer iterations and achieves high classification accuracy.
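The AdaBoost-inspired loop the abstract outlines can be sketched as a skeleton in Python. This is a simplified illustration, not the published algorithm: the crowdsourcing step is replaced by a hypothetical `feature_generator` callback standing in for workers, and `train` is any classifier-training routine supplied by the caller.

```python
def adaflock_sketch(X, y, train, feature_generator, n_iters=3):
    """Skeleton of an AdaFlock-style loop (simplified illustration).

    X: list of samples, each a list of feature values
    y: class labels
    train(X, y) -> model exposing .predict(X)
    feature_generator(hard_samples) -> a function mapping a sample to a
        new feature value; in AdaFlock this role is played by crowd
        workers who inspect the misclassified samples
    """
    features = list(X)
    model = None
    for _ in range(n_iters):
        model = train(features, y)
        preds = model.predict(features)
        # samples the current classifier gets wrong, as in AdaBoost's
        # focus on misclassified examples
        hard = [i for i, (p, t) in enumerate(zip(preds, y)) if p != t]
        if not hard:
            break  # every sample is classified correctly
        new_feat = feature_generator([features[i] for i in hard])
        # append the worker-generated feature to every sample
        features = [row + [new_feat(row)] for row in features]
    return features, model
```

Each iteration widens the feature matrix by one column derived from the currently misclassified samples, mirroring how AdaFlock directs worker attention to the classifier's errors.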

IJCAI Conference 2017 · Conference Paper

Online Optimization of Video-Ad Allocation

  • Hanna Sumita
  • Yasushi Kawase
  • Sumio Fujita
  • Takuro Fukunaga

In this paper, we study video advertising in the context of Internet advertising. Video advertising is a rapidly growing industry, but its computational aspects have not yet been investigated. A difference between video advertising and traditional display advertising is that the former requires more time to be viewed. In contrast to a traditional display advertisement, a video advertisement has no influence over a user unless the user watches it for a certain amount of time. Previous studies have considered neither the length of video advertisements nor the time users spend watching them. Motivated by this observation, we formulate a new online optimization problem for optimizing the allocation of video advertisements, and we develop a nearly (1 − 1/e)-competitive algorithm for finding an envy-free allocation of video advertisements.
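A minimal sketch of the online setting the abstract describes: impressions arrive one at a time, and an ad only counts if the user's watch time covers its full length. This simple greedy heuristic is illustrative only and carries no competitive guarantee; the paper's nearly (1 − 1/e)-competitive, envy-free algorithm is more involved. The `ads` dictionary layout is an assumption made for the sketch.

```python
def online_allocate(ads, impressions):
    """Greedy online allocation sketch (illustration only).

    ads: dict ad_id -> {"length": seconds the ad runs,
                        "budget": remaining plays allowed}
    impressions: iterable of per-user watch times, arriving online
    """
    allocation = []
    for watch_time in impressions:
        # an ad has effect only if watched for its full length
        feasible = [a for a, d in ads.items()
                    if d["budget"] > 0 and d["length"] <= watch_time]
        if not feasible:
            allocation.append(None)  # no ad fits this impression
            continue
        # greedily pick the longest feasible ad (a simple heuristic)
        chosen = max(feasible, key=lambda a: ads[a]["length"])
        ads[chosen]["budget"] -= 1
        allocation.append(chosen)
    return allocation
```

The online difficulty is visible even here: committing a long ad to an early impression may leave later, shorter impressions unservable, which is what competitive analysis of the allocation addresses.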