Arrow Research search

Author name cluster

Simon Kasif

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

12 papers
2 author rows

Possible papers (12)

FOCS 2002 (Conference Paper)

Learning a Hidden Matching

  • Noga Alon
  • Richard Beigel
  • Simon Kasif
  • Steven Rudich
  • Benny Sudakov

We consider the problem of learning a matching (i.e., a graph in which all vertices have degree 0 or 1) in a model where the only allowed operation is to query whether a set of vertices induces an edge. This is motivated by a problem that arises in molecular biology. In the deterministic nonadaptive setting, we prove a (1/2 + o(1))(n choose 2) upper bound and a nearly matching 0.32(n choose 2) lower bound on the minimum possible number of queries. In contrast, if we allow randomness then we obtain (by a randomized, nonadaptive algorithm) a much lower O(n log n) upper bound, which is best possible (even for randomized fully adaptive algorithms).
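The query model in the abstract can be made concrete with a toy sketch. The code below is not the paper's algorithm; it only sets up a hypothetical edge-query oracle and the naive deterministic baseline of querying all (n choose 2) pairs, which is the bound the paper's randomized O(n log n) result improves on. All function names here are illustrative.

```python
import itertools

def make_edge_oracle(matching_edges):
    """Build the only allowed operation from the abstract: given a set of
    vertices, answer whether the hidden matching has an edge inside it."""
    edges = {frozenset(e) for e in matching_edges}

    def induces_edge(vertex_set):
        s = set(vertex_set)
        return any(e <= s for e in edges)

    return induces_edge

def learn_matching_naive(n, induces_edge):
    """Naive deterministic baseline: one query per vertex pair,
    i.e. (n choose 2) queries in total."""
    return {(u, v) for u, v in itertools.combinations(range(n), 2)
            if induces_edge({u, v})}
```

For example, with the hidden matching {(0, 1), (2, 3)} on 4 vertices, the naive learner recovers both edges after 6 pairwise queries.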

UAI 1995 (Conference Paper)

Logarithmic-Time Updates and Queries in Probabilistic Networks

  • Arthur L. Delcher
  • Adam J. Grove
  • Simon Kasif
  • Judea Pearl

In this paper we propose a dynamic data structure that supports efficient algorithms for updating and querying singly connected Bayesian networks (causal trees and polytrees). In the conventional algorithms, new evidence is absorbed in time O(1) and queries are processed in time O(N), where N is the size of the network. We propose a practical algorithm which, after a preprocessing phase, allows us to answer queries in time O(log N) at the expense of O(log N) time per evidence absorption. The usefulness of sub-linear processing time manifests itself in applications requiring (near) real-time response over large probabilistic databases.
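The update/query tradeoff the abstract describes, trading O(1) updates and O(N) queries for O(log N) on both sides via preprocessing into a balanced structure, has a familiar simpler instance. The sketch below is a Fenwick tree over plain numbers, offered only as an analogy for the tradeoff, not as the paper's polytree data structure.

```python
class FenwickTree:
    """Fenwick (binary indexed) tree: after O(N) preprocessing of an array
    of size N, both point updates and prefix queries run in O(log N)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def add(self, i, delta):
        """Absorb an update at 0-based position i in O(log N)."""
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):
        """Answer a query (sum of positions 0..i) in O(log N)."""
        i += 1
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s
```

A flat array gives O(1) updates and O(N) queries; the balanced tree structure rebalances the costs to logarithmic on both operations, which is the flavor of the result above.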

AAAI 1986 (Conference Paper)

On the Parallel Complexity of Some Constraint Satisfaction Problems

  • Simon Kasif

Constraint satisfaction networks have been shown to be a very useful tool for knowledge representation in Artificial Intelligence applications. These networks often utilize local constraint propagation techniques to achieve global consistency (consistent labelling in vision). Such methods have been used extensively in the context of image understanding and interpretation, as well as planning, natural language analysis and commonsense reasoning. In this paper we study the parallel complexity of discrete relaxation, one of the most commonly used constraint satisfaction techniques. Since constraint propagation procedures such as discrete relaxation appear to operate locally, it has previously been believed that the relaxation approach for achieving global consistency has a natural parallel solution. Our analysis suggests that a parallel solution is unlikely to improve much on the known sequential solutions. Specifically, we prove that the problem solved by discrete relaxation is log-space complete for P (the class of problems solvable in deterministic polynomial time). Intuitively, this implies that discrete relaxation is inherently sequential and it is unlikely that we can solve the polynomial time version of the consistent labelling problem in logarithmic time by using only a polynomial number of processors. Some practical implications of our result are discussed.
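A minimal sketch of the kind of discrete relaxation the abstract analyzes (in the style of an AC-1 arc-consistency pass; the variable and function names are illustrative, not from the paper): labels with no supporting label in a constrained neighbor are pruned, and the outer fixed-point loop, which must rerun whenever any domain shrinks, is the part that the P-completeness result suggests cannot be parallelized away.

```python
def discrete_relaxation(domains, constraints):
    """Prune labels until a fixed point (AC-1-style sketch).

    domains: dict mapping variable -> set of candidate labels
    constraints: dict mapping (x, y) -> predicate(a, b), True when
                 label a for x is compatible with label b for y
    """
    changed = True
    while changed:                 # sequential fixed-point iteration
        changed = False
        for (x, y), compatible in constraints.items():
            # keep only labels of x supported by some label of y
            supported = {a for a in domains[x]
                         if any(compatible(a, b) for b in domains[y])}
            if supported != domains[x]:
                domains[x] = supported
                changed = True
    return domains
```

For instance, with domains x in {1, 2}, y in {2}, and an inequality constraint in both directions, the pass prunes x to {1}, since x = 2 has no support in y's domain.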