NeuroSketch: A Neural Network Method for Fast and Approximate Evaluation of Range Aggregate Queries
Sepanta Zeighami, Vatsal Sharan, Cyrus Shahabi
SIGMOD, 2023
arXiv |
pdf
Compressed Factorization: Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data
What learning algorithms can be run directly on compressively-sensed data? If we compress a high-dimensional matrix via a random projection, what is the relationship between the factors of the original matrix and the factors of the compressed matrix? While it is well known that random projections preserve a number of geometric properties of a dataset, this work shows that random projections can also preserve certain solutions to non-convex, NP-hard problems such as non-negative matrix factorization---both in theory and in practice.
Jump to 53:20 in this video.
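As a rough illustration of the phenomenon (not the paper's algorithm or its guarantees), the toy sketch below factorizes a planted non-negative low-rank matrix before and after a sparse random projection and compares the recovered left factors. Using a non-negative projection matrix, so that off-the-shelf NMF can be run directly on the compressed data, is an assumption made only for this example.

```python
# Toy illustration only: compare NMF factors of a non-negative low-rank matrix
# with NMF factors of a randomly compressed version of the same matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n, d, m, k = 500, 800, 100, 5            # samples, dims, compressed dims, rank

# Planted non-negative rank-k data.
W_true = rng.exponential(size=(n, k))
H_true = rng.exponential(size=(k, d))
X = W_true @ H_true

# Sparse, non-negative random projection d -> m (an assumption for this toy
# example, so the compressed matrix stays non-negative and NMF applies).
P = (rng.random((d, m)) < 0.02) * rng.exponential(size=(d, m))
X_compressed = X @ P                      # shape (n, m)

W_full = NMF(n_components=k, init="nndsvda", max_iter=300).fit_transform(X)
W_comp = NMF(n_components=k, init="nndsvda", max_iter=300).fit_transform(X_compressed)

# Crude comparison: correlate each factor recovered from the compressed data
# with its best match among the factors recovered from the original data.
corr = np.corrcoef(W_full.T, W_comp.T)[:k, k:]
print("best-match correlations:", np.abs(corr).max(axis=1))
```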
Efficient Anomaly Detection via Matrix Sketching
Vatsal Sharan, Parikshit Gopalan, Udi Wieder
NeurIPS, 2018
abstract |
pdf |
arXiv
PCA-based anomaly scores are commonly used for finding anomalies in high-dimensional data. Naive algorithms compute these scores by explicitly performing PCA of the covariance matrix, which requires space quadratic in the dimensionality of the data. Using matrix sketching tools and new matrix perturbation inequalities, we give the first streaming algorithms for computing these scores using space linear or sublinear in the dimension.
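The general flavor of the approach can be conveyed with a small sketch-and-solve example (hedged: this is not the paper's streaming algorithm, and all parameter choices below are arbitrary): approximate the top-k subspace from a small random sketch of the data, then score each point by its rank-k projection distance.

```python
# Hedged illustration: estimate the top-k subspace from a random sketch of the
# data, then use it to compute rank-k projection distances as anomaly scores.
import numpy as np

rng = np.random.default_rng(1)
n, d, k, sketch_rows = 5000, 300, 10, 50

# Data lying near a rank-k subspace, with a few planted anomalies off it.
A = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.01 * rng.normal(size=(n, d))
A[:5] += 5.0 * rng.normal(size=(5, d))      # anomalies in the first 5 rows

# Sketch: compress the n rows down to `sketch_rows` rows; S @ A can be
# accumulated one row of A at a time, using only O(sketch_rows * d) space.
S = rng.normal(size=(sketch_rows, n)) / np.sqrt(sketch_rows)
SA = S @ A

# Top-k right singular vectors of the sketch approximate those of A.
_, _, Vt = np.linalg.svd(SA, full_matrices=False)
V_k = Vt[:k].T                              # (d, k)

# Rank-k projection distance of each row: squared residual after projecting
# onto the estimated top-k subspace.
residual = A - (A @ V_k) @ V_k.T
scores = np.einsum("ij,ij->i", residual, residual)
print("mean score, anomalies vs. rest:", scores[:5].mean(), scores[5:].mean())
```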
Moment-Based Quantile Sketches for Efficient High Cardinality Aggregation Queries
Edward Gan, Jialin Ding, Kai Sheng Tai, Vatsal Sharan, Peter Bailis
VLDB, 2018
abstract |
pdf |
arXiv |
code
We propose a compact and efficiently mergeable sketch for answering quantile queries on a dataset. This data structure, which we refer to as the moments sketch, operates with a small memory footprint (200 bytes) and achieves computationally efficient (50ns) merges by tracking only a set of summary statistics, notably the sample moments.
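A minimal, assumed version of such a mergeable summary is sketched below; it tracks only the count, the extrema, and the sample power sums, and omits the maximum-entropy solve that the moments sketch uses to turn these statistics into quantile estimates.

```python
# Simplified, illustrative mergeable summary in the spirit of the moments
# sketch (assumed interface; quantile estimation from the moments is omitted).
from dataclasses import dataclass, field

@dataclass
class MomentsSketch:
    k: int = 10                                      # number of moments tracked
    count: float = 0.0
    minimum: float = float("inf")
    maximum: float = float("-inf")
    power_sums: list = field(default_factory=list)   # sums of x^1 .. x^k

    def __post_init__(self):
        if not self.power_sums:
            self.power_sums = [0.0] * self.k

    def add(self, x: float) -> None:
        self.count += 1
        self.minimum = min(self.minimum, x)
        self.maximum = max(self.maximum, x)
        p = 1.0
        for i in range(self.k):
            p *= x
            self.power_sums[i] += p

    def merge(self, other: "MomentsSketch") -> None:
        # Merging is element-wise addition of the summary statistics, which is
        # what makes the sketch cheap to combine across partitions.
        self.count += other.count
        self.minimum = min(self.minimum, other.minimum)
        self.maximum = max(self.maximum, other.maximum)
        for i in range(self.k):
            self.power_sums[i] += other.power_sums[i]

# Usage: build per-partition sketches, then merge.
left, right = MomentsSketch(), MomentsSketch()
for x in [1.0, 2.0, 3.0]:
    left.add(x)
for x in [10.0, 20.0]:
    right.add(x)
left.merge(right)
print(left.count, left.minimum, left.maximum, left.power_sums[:2])
```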
Prediction with a Short Memory
When is it necessary to remember significant information from the distant past to make good predictions about the future in sequential prediction tasks? How do we consolidate and reference memories about the past in order to predict the future? This work studies these questions, and the findings are perhaps counterintuitive: we show that a simple (Markov) model, which bases its predictions only on the most recent observations and the statistics of short windows, can achieve small error on average with respect to a large class of sequences, such as those generated by Hidden Markov Models (HMMs). This shows that long-term memory may not be necessary for making good predictions on average, even on sequences generated by complex processes with long-range dependencies.
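For concreteness, here is a toy version of the kind of short-memory predictor the result concerns (an assumed illustration, not the paper's construction or analysis): an order-ell Markov model that predicts the next symbol from empirical counts over short windows.

```python
# Toy short-memory predictor: empirical next-symbol statistics over length-ell
# windows of the observed sequence.
from collections import Counter, defaultdict

def fit_short_memory_predictor(sequence, ell):
    """Collect empirical next-symbol counts for every length-ell context."""
    context_counts = defaultdict(Counter)
    for i in range(ell, len(sequence)):
        context = tuple(sequence[i - ell:i])
        context_counts[context][sequence[i]] += 1
    return context_counts

def predict_next(context_counts, context, alphabet):
    """Return the empirical conditional distribution over the next symbol."""
    counts = context_counts.get(tuple(context), Counter())
    total = sum(counts.values())
    if total == 0:                        # unseen context: fall back to uniform
        return {a: 1.0 / len(alphabet) for a in alphabet}
    return {a: counts[a] / total for a in alphabet}

# Usage on a toy binary sequence with short-range structure.
seq = [0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0]
model = fit_short_memory_predictor(seq, ell=2)
print(predict_next(model, [0, 1], alphabet=[0, 1]))
```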
Sketching Linear Classifiers over Data Streams
Kai Sheng Tai, Vatsal Sharan, Peter Bailis, Gregory Valiant
SIGMOD, 2018
abstract |
pdf |
arXiv |
code
We introduce a new sub-linear space sketch---the Weight-Median Sketch---for learning compressed linear classifiers over data streams while supporting efficient recovery of the large-magnitude weights in the classifier. The Weight-Median Sketch adopts the core data structure used in the Count-Sketch but, instead of sketching counts, captures sketched gradient updates to the model parameters. We establish recovery guarantees for our sketch and demonstrate empirical improvements in memory-accuracy trade-offs over alternative memory-budgeted methods.
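A simplified, assumed version of the core mechanism is sketched below: gradient updates for each feature are hashed into signed buckets, Count-Sketch style, and a weight is estimated as a median over the rows. The paper's full method includes additional machinery (e.g., for explicitly tracking the heaviest weights) that is omitted here.

```python
# Simplified, illustrative Count-Sketch-style structure over gradient updates
# (assumed interface, not the paper's full algorithm).
import numpy as np

class SketchedLinearModel:
    def __init__(self, depth=5, width=256, seed=0):
        rng = np.random.default_rng(seed)
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        # Per-row seeds for the bucket and sign hash functions.
        self.bucket_seeds = rng.integers(0, 2**31 - 1, size=depth)
        self.sign_seeds = rng.integers(0, 2**31 - 1, size=depth)

    def _bucket(self, row, feature):
        return hash((int(self.bucket_seeds[row]), feature)) % self.width

    def _sign(self, row, feature):
        return 1.0 if hash((int(self.sign_seeds[row]), feature)) % 2 == 0 else -1.0

    def update(self, feature, gradient_step):
        """Apply a signed gradient update for one feature to every row."""
        for r in range(self.depth):
            self.table[r, self._bucket(r, feature)] += self._sign(r, feature) * gradient_step

    def estimate(self, feature):
        """Recover an estimate of the feature's weight via a median over rows."""
        vals = [self._sign(r, feature) * self.table[r, self._bucket(r, feature)]
                for r in range(self.depth)]
        return float(np.median(vals))

# Usage: stream gradient steps for a few features, then read back estimates.
sketch = SketchedLinearModel()
for _ in range(100):
    sketch.update("heavy_feature", 0.05)
    sketch.update("light_feature", 0.001)
print(sketch.estimate("heavy_feature"), sketch.estimate("light_feature"))
```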