SODA Conference 2024 Conference Paper
Fast Fourier transform via automorphism groups of rational function fields
- Songsong Li
- Chaoping Xing
Author name cluster
Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.
SODA Conference 2024 Conference Paper
FOCS Conference 2024 Conference Paper
Gabidulin codes, serving as the rank-metric counterpart of Reed-Solomon codes, constitute an important class of maximum rank distance (MRD) codes. However, unlike the fruitful positive results about the list decoding of Reed-Solomon codes, results concerning the list decodability of Gabidulin codes in the rank metric are all negative so far. For example, in contrast to Reed-Solomon codes, which are always list decodable up to the Johnson bound in the Hamming metric, Raviv and Wachter-Zeh (IEEE TIT, 2016 and 2017) constructed a class of Gabidulin codes that are not even combinatorially list decodable beyond the unique decoding radius in the rank metric. Proving the existence of Gabidulin codes with good combinatorial list decodability in the rank metric has remained a long-standing open problem. In this paper, we resolve the aforementioned open problem by showing that, with high probability, random Gabidulin codes over sufficiently large alphabets attain the optimal generalized Singleton bound for list decoding in the rank metric. In particular, they achieve list decoding capacity in the rank metric. Our work is significantly influenced by the recent breakthroughs in the combinatorial list decodability of Reed-Solomon codes, especially the work by Brakensiek, Gopi, and Makam (STOC 2023). Our major conceptual and technical contributions, which may hold independent interest, consist of the following: (1) We initiate the study of "higher order MRD codes" and provide a novel unified theory, which runs parallel to the theory of "higher order MDS codes" developed by Brakensiek, Gopi, and Makam. (2) We prove a natural analog of the GM-MDS theorem, proven by Lovett (FOCS 2018) and Yildiz and Hassibi (IEEE TIT, 2019), which we call the GM-MRD theorem. In particular, our GM-MRD theorem for Gabidulin codes is strictly stronger than the GM-MDS theorem for Gabidulin codes proven by Yildiz and Hassibi.
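As background for the rank-metric notions in this abstract, the sketch below (illustrative only, not the paper's construction; all parameter choices are hypothetical) builds a tiny classical Gabidulin code over GF(2^3) by evaluating linearized polynomials at an F_2-basis, then verifies by brute force that every nonzero codeword, expanded into a binary matrix, has rank at least n - k + 1 = 2, i.e. the code is MRD.

```python
# Minimal sketch of a Gabidulin code over GF(2^3) (q = 2, m = n = 3, k = 2).
# GF(8) elements are ints 0..7, i.e. polynomials over GF(2) mod x^3 + x + 1.

MOD = 0b1011  # x^3 + x + 1

def gf_mul(a, b):
    """Carryless multiplication in GF(8) with reduction by MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= MOD
    return r

def frob(a):
    """Frobenius endomorphism a -> a^2 over a field of characteristic 2."""
    return gf_mul(a, a)

def eval_qpoly(coeffs, x):
    """Evaluate the linearized (q-)polynomial sum_i c_i * x^(2^i)."""
    acc, power = 0, x
    for c in coeffs:
        acc ^= gf_mul(c, power)
        power = frob(power)
    return acc

# Evaluation points: an F_2-basis {1, x, x^2} of GF(8), so n = 3; take k = 2.
points = [1, 2, 4]
k = 2

def encode(msg):
    """Encode k coefficients in GF(8) into a length-n codeword."""
    return [eval_qpoly(msg, p) for p in points]

def rank_over_gf2(codeword):
    """Expand each GF(8) symbol into a 3-bit column and return the GF(2) rank."""
    m = [[(sym >> i) & 1 for sym in codeword] for i in range(3)]
    rank = 0
    for col in range(len(codeword)):
        piv = next((r for r in range(rank, 3) if m[r][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for r in range(3):
            if r != rank and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# MRD property: minimum rank distance equals n - k + 1 = 2.
min_rank = min(rank_over_gf2(encode([a, b]))
               for a in range(8) for b in range(8) if (a, b) != (0, 0))
```

The key fact the check exercises: a nonzero linearized polynomial of q-degree less than k has a root space of dimension less than k, so its image of an n-dimensional point space has rank at least n - k + 1.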
SODA Conference 2023 Conference Paper
The Gilbert-Varshamov (GV for short) bound has been a benchmark for good Hamming-metric codes. It was even conjectured by some coding theorists that the asymptotic Gilbert-Varshamov bound is tight. The GV bound remained the best asymptotic lower bound for thirty years before it was surpassed by the Tsfasman-Vlăduţ-Zink bound via algebraic geometry codes. The discovery of algebraic geometry codes by Goppa was a breakthrough in coding theory. After another twenty years, the next improvements on the Tsfasman-Vlăduţ-Zink bound came with the work of Xing and Elkies [14, 15, 1, 2] in the early 2000s via tools from algebraic geometry. Using ideas similar to those in [14, 15, 9], some further improvements were given in [7, 17]. Since then, no further progress on asymptotic lower bounds has been made. The main result of this paper is to show that all previous asymptotic lower bounds can be improved in an interval. We present two types of constructions of Hamming-metric codes. Both constructions involve algebraic geometry and require insight into the applications of algebraic geometry to coding theory. To obtain good codes, one construction requires a larger number of positive divisors of fixed degree, while the other requires a smaller number of positive divisors of fixed degree. As a result, no matter how large the number of positive divisors of fixed degree is, we can always obtain codes with good parameters. It turns out that all previous asymptotic lower bounds are improved.
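For context on the two classical bounds this abstract compares, here is a small numeric sketch using the standard textbook formulas (not this paper's new bounds; the parameter choices q = 49, δ = 0.5 are illustrative): the asymptotic GV rate is 1 − H_q(δ), and the Tsfasman-Vlăduţ-Zink rate from algebraic geometry codes is 1 − δ − 1/(√q − 1) for a square prime power q.

```python
import math

def h_q(x, q):
    """q-ary entropy H_q(x) = x*log_q(q-1) - x*log_q(x) - (1-x)*log_q(1-x)."""
    assert 0 < x < 1
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def gv_rate(delta, q):
    """Asymptotic Gilbert-Varshamov lower bound on rate at relative distance delta."""
    return 1 - h_q(delta, q)

def tvz_rate(delta, q):
    """Tsfasman-Vladut-Zink bound (q a square prime power)."""
    return 1 - delta - 1 / (math.sqrt(q) - 1)

q, delta = 49, 0.5
print(f"GV : {gv_rate(delta, q):.4f}")
print(f"TVZ: {tvz_rate(delta, q):.4f}")  # TVZ exceeds GV here: q = 49 is where AG codes first win
```

Running this shows the TVZ rate strictly above the GV rate at these parameters, which is the classical "GV bound is not tight" phenomenon the abstract refers to; q = 49 is the smallest square prime power where this happens.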
SODA Conference 2021 Conference Paper
For an integer q ≥ 2, a perfect q-hash code C is a block code over [q] ≔ {1, …, q} of length n in which every subset {c_1, c_2, …, c_q} of q elements is separated, i.e., there exists i ∈ [n] such that {proj_i(c_1), …, proj_i(c_q)} = [q], where proj_i(c_j) denotes the i-th position of c_j. Finding the maximum size M(n, q) of perfect q-hash codes of length n, for given q and n, is a fundamental problem in combinatorics, information theory, and computer science. In this paper, we are interested in the asymptotic behavior of this problem; more precisely, we focus on the rate R_q, the exponential growth rate of M(n, q) in n. A well-known probabilistic argument gives a lower bound on R_q [10, 12]. This is still the best-known lower bound so far, except for the case q = 3, for which Körner and Marton [13] found that a concatenation technique yields perfect 3-hash codes that beat this probabilistic lower bound. This improved lower bound on R_3 was discovered in 1988, and there has been no progress on lower bounds for R_q for more than 30 years, despite some work on upper bounds on R_q. In this paper we show that this probabilistic lower bound can be improved for q = 4, 8, all odd integers between 5 and 25, and all sufficiently large q with q (mod 4) ≠ 2. Although we are not able to prove that our construction beats the probabilistic method for all q with q (mod 4) ≠ 2, the fact that it does so for both small and large q suggests that our new construction might beat the previous lower bound for all q with q (mod 4) ≠ 2. Our idea is based on a modified concatenation, differing from the concatenation of [10] in which both the inner and outer codes are separated. In our concatenation, the inner code is not necessarily a perfect q-hash code. This gives a more flexible choice of inner codes, and hence we are able to improve the lower bound on R_q.
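To make the separation condition concrete, the brute-force checker below (illustrative code, not the paper's construction; the example code words are arbitrary, and symbols {0, 1, 2} stand in for [3]) tests whether every q-subset of a code has some coordinate at which all q symbols appear.

```python
from itertools import combinations

def is_separated(words, q):
    """True if some coordinate i shows all q distinct symbols across the q words."""
    return any(len({w[i] for w in words}) == q for i in range(len(words[0])))

def is_perfect_q_hash(code, q):
    """Check every q-subset of the code for the separation property."""
    return all(is_separated(sub, q) for sub in combinations(code, q))

# Tiny q = 3 example: alphabet {0, 1, 2}, length 3, four code words.
code = [(0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 0, 0)]
print(is_perfect_q_hash(code, 3))  # → True
```

Exhaustive checking like this is exponential in the code size, which is why the asymptotic question about M(n, q) is attacked with probabilistic and algebraic constructions rather than search.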
IS Journal 2020 Journal Article
Machine learning relies on the availability of vast amounts of data for training. However, in reality, data are mostly scattered across different organizations and cannot be easily integrated due to many legal and practical constraints. To address this important challenge in the field of machine learning, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical modeling under a data federation. FTL allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred across domains in a data federation, thereby enabling a target-domain party to build flexible and effective models by leveraging rich labels from a source domain. This framework requires minimal modifications to the existing model structure and provides the same level of accuracy as non-privacy-preserving transfer learning. It is flexible and can be effectively adapted to various secure multiparty machine learning tasks.
SODA Conference 2014 Conference Paper
STOC Conference 2013 Conference Paper
STOC Conference 2012 Conference Paper