Current plotting tools are inadequate for revealing the distributions of large, complex datasets, both because of technical limitations and because the results vary dramatically with the parameters chosen and with the dataset itself. Avoiding these problems requires either prior knowledge of the distribution or tedious trial-and-error parameter tuning, neither of which is feasible for many of the datasets now being collected. The new datashader library (https://github.com/bokeh/datashader) makes it practical to work with data at this scale, easily and interactively visualizing millions or billions of points. In this talk, we'll demonstrate how datashader provides a flexible data-processing pipeline that accepts automatic or custom-defined algorithms at every stage, making it easier to reveal the underlying structure of a dataset and to focus on the specific aspects of interest.
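To make the pipeline idea concrete, the sketch below implements its two central stages in pure Python: binning raw points into a fixed-size aggregate grid, then mapping the aggregates to pixel intensities. This is an illustrative toy, not datashader itself; the function names `aggregate` and `shade` and all parameters here are our own, standing in for datashader's Canvas aggregation and transfer functions.

```python
import random

def aggregate(points, width, height, x_range, y_range):
    """Aggregation stage: bin (x, y) points into a width x height grid of counts."""
    (x0, x1), (y0, y1) = x_range, y_range
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        if x0 <= x < x1 and y0 <= y < y1:
            col = int((x - x0) / (x1 - x0) * width)
            row = int((y - y0) / (y1 - y0) * height)
            grid[row][col] += 1
    return grid

def shade(grid):
    """Colormapping stage: scale counts to 0-255 grayscale intensities."""
    peak = max(max(row) for row in grid) or 1
    return [[round(255 * count / peak) for count in row] for row in grid]

# 100,000 points from a 2D Gaussian, rendered onto a 32x32 grid.
random.seed(0)
pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
img = shade(aggregate(pts, 32, 32, (-3.0, 3.0), (-3.0, 3.0)))
```

Because each stage is a plain function over intermediate data, any stage can be swapped out (e.g. a log or histogram-equalization transform between `aggregate` and `shade`), which is the flexibility the real library's pipeline provides.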