Grammar-Based Compression in a Streaming Model
We show that, given a string $s$ of length $n$, with constant memory and
logarithmic passes over a constant number of streams, we can build a
context-free grammar that generates $s$ and only $s$ and whose size is within
an $\Oh{\min (g \log g, \sqrt{n \log g})}$-factor of the size $g$ of the
smallest such grammar.
This stands in contrast to our previous result that, with polylogarithmic
memory and polylogarithmic passes over a single stream, we cannot build such a
grammar whose size is within any polynomial factor of $g$.
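To make the object under discussion concrete: a grammar that generates $s$ and only $s$ is a straight-line program, and one standard (offline, non-streaming) way to build a small one is a Re-Pair-style scheme that repeatedly replaces the most frequent adjacent pair of symbols with a fresh nonterminal. The sketch below is only an illustration of that kind of grammar, not the streaming algorithm of this paper; the function names are ours.

```python
def repair(s):
    """Build a straight-line grammar for s by repeatedly replacing the
    most frequent adjacent pair with a new nonterminal (Re-Pair style).
    Returns (start_sequence, rules), where each rule maps a nonterminal
    to the pair of symbols it derives."""
    seq = list(s)
    rules = {}
    next_id = 0
    while True:
        # Count adjacent pairs in the current right-hand side.
        counts = {}
        for pair in zip(seq, seq[1:]):
            counts[pair] = counts.get(pair, 0) + 1
        if not counts:
            break
        pair, freq = max(counts.items(), key=lambda kv: kv[1])
        if freq < 2:
            break  # no pair repeats, so no further compression
        nt = f"N{next_id}"
        next_id += 1
        rules[nt] = pair
        # Replace non-overlapping occurrences left to right.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Derive the unique string generated from a symbol."""
    if sym not in rules:
        return sym  # terminal
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)

start, rules = repair("abababab")
decoded = "".join(expand(sym, rules) for sym in start)
assert decoded == "abababab"
```

Because every nonterminal derives exactly one string, the grammar's language is the singleton $\{s\}$; the grammar's size (total length of the right-hand sides) is what the approximation ratio above measures against the smallest such grammar's size $g$.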