Parallel Coordinates via Pivot and LOD Expressions

Parallel coordinates are a useful chart type for comparing a number of variables at once across a dimension. They aren’t a native chart type in Tableau, but they have been built at various times; here’s one by Joe Mako that I use in this post for the data and basic chart. The data is a set of vehicle attributes from the 1970s that I first saw used in this post from Robert Kosara. This post updates the method Joe used with two enhancements that make the parallel coordinates plot easier to create and more extensible, namely pivot and Level of Detail Expressions.

The major challenge in creating a parallel coordinates chart is getting the different ranges of each variable onto a common scale. The easiest way to do this is to linearly scale each measure to the range 0-1, with an equation of the form (x – min(x))/(max(x) – min(x)). Once that scale is in place, laying out the viz takes only 4 pills to get the initial chart:

[Screenshot: the initial parallel coordinates view and its four pills]

Category is a dimension holding the different variables, ID identifies the individual cars in this case, and Value Scaled is the scaled measure that draws the axes. Value Scaled is hidden from the tooltips, while Value is what the tooltips display.
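
If it helps to see the scaling itself outside of Tableau, here’s a minimal Python/pandas sketch of that rescaling equation applied to one made-up measure (the values are hypothetical):

import pandas as pd

# A hypothetical measure, e.g. acceleration values for a few cars
x = pd.Series([12.0, 11.5, 16.5, 20.5])

# Linear rescale to 0-1: (x - min(x)) / (max(x) - min(x))
x_scaled = (x - x.min()) / (x.max() - x.min())
print(x_scaled.tolist())  # the min maps to 0.0, the max to 1.0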

What makes this easier to create is Tableau’s pivot function. In Joe’s original version the data is in a “wide” format like this:

[Screenshot: the wide-format data source]

So for each of the measures a calculation had to be built, and then the view was built using Measure Names and Measure Values:

[Screenshot: the Measure Names/Measure Values version of the view]

The major limitation here is in the tooltips (in fact, Joe had rightly hidden them in the original because they were so useless):

[Screenshot: a tooltip showing the scaled value instead of the actual acceleration]

The tooltip is showing the scaled value, not the actual value of acceleration. This is a limitation of Tableau’s Measure Names/Measure Values pills: if I put the other measures on the tooltip, then I see all of them for every measure and it’s harder to identify the one I’m looking at. Plus, the axis ranges are harder to describe.

Pivoting Makes A Dimension

I think of Tableau’s Measure Names as a form of pivoting the data to create a faux dimension. I write faux because, beyond the limits mentioned above, we can’t group Measure Names, we can’t blend on Measure Names, we can’t do cascading filters on Measure Names, etc. The workaround is to pivot our data so we turn those columns of measures into rows, getting an actual “Pivot field names” dimension (renamed to Category in my case) and a single “Pivot field values” measure (renamed to Value in my case):

[Screenshot: the pivoted data source with the Category dimension and Value measure]
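
For readers who reshape data outside of Tableau, this wide-to-tall pivot is what pandas calls a melt; here’s a minimal sketch with made-up columns:

import pandas as pd

# Hypothetical wide data: one row per car, one column per measure
wide = pd.DataFrame({
    "ID":           ["car 1", "car 2"],
    "Acceleration": [12.0, 11.5],
    "Horsepower":   [130, 165],
})

# Turn the measure columns into rows, like Tableau's pivot:
# the column names become Category ("Pivot field names") and the
# cell contents become Value ("Pivot field values")
tall = wide.melt(id_vars="ID", var_name="Category", value_name="Value")
print(tall)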

Then for the scaling we can use a single calculation (instead of one for every original column); here’s the Value Scaled measure’s formula:

([Value] - {EXCLUDE [ID] : MIN([Value])})/
({EXCLUDE [ID] : MAX([Value])} - {EXCLUDE [ID] : MIN([Value])})

I used an EXCLUDE Level of Detail Expression here rather than a TOTAL() table calculation as an example of how we can use LODs to replace table calculations and get a simpler view, since we don’t have to set the table calculation’s compute using.
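
As a rough translation of what that EXCLUDE LOD computes (not the author’s Tableau mechanics, just an analogue), here’s a pandas sketch on hypothetical tall data: the min and max are taken per Category, ignoring (excluding) ID, and broadcast back to every row before the rescaling:

import pandas as pd

# Hypothetical tall data, as produced by the pivot
tall = pd.DataFrame({
    "ID":       ["car 1", "car 2", "car 1", "car 2"],
    "Category": ["Acceleration", "Acceleration", "Horsepower", "Horsepower"],
    "Value":    [12.0, 11.5, 130.0, 165.0],
})

# {EXCLUDE [ID] : MIN([Value])} ~ the per-Category min, repeated on every row
cat_min = tall.groupby("Category")["Value"].transform("min")
cat_max = tall.groupby("Category")["Value"].transform("max")
tall["Value Scaled"] = (tall["Value"] - cat_min) / (cat_max - cat_min)
print(tall)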

Now, with a real Category dimension in the view, the Value Scaled calc is computed for each Category & ID. This also means that if we put the Value measure in the view, it is computed for each Category & ID as well, immediately leading to more usable tooltips:

[Screenshot: the improved tooltip showing the actual value for each measure]

For a quick interactive analysis this view takes just a couple of minutes to set up and the insights can be well worth the effort. Prior to the existence of Pivot and LOD expressions this view would have taken several times as long to create, so for me this revised method takes this chart type from “do I want to?” to “why not??”

Cleaning Up

To put this on a dashboard, some further cleanup and additions are necessary. Identifying the axis ranges is also easier with the pivoted data. In this case I used a table calculation to identify the bottom- and top-most marks on each axis and used it as mark labels to show the axis range:

[Screenshot: the view with axis range labels on each Category]

The Value for Label calculation has the formula:

IF FIRST()==0 OR LAST()==0 THEN
    SUM([Value])
END

The addressing is an advanced Compute Using so that it identifies the very first or last mark in each Category based on the value:

[Screenshot: the advanced Compute Using settings for the Value for Label calculation]
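
Outside of Tableau, here’s a hedged pandas sketch of what that calc plus its compute using accomplish (ties aside): within each Category, only the bottom and top values keep a label, and everything else is null:

import pandas as pd

# Hypothetical tall data
tall = pd.DataFrame({
    "ID":       ["car 1", "car 2", "car 3"] * 2,
    "Category": ["Acceleration"] * 3 + ["Horsepower"] * 3,
    "Value":    [12.0, 11.5, 16.5, 130.0, 165.0, 97.0],
})

# Keep the label only where the value is the per-Category min or max,
# mirroring what FIRST()==0 / LAST()==0 find when sorted by value
vmin = tall.groupby("Category")["Value"].transform("min")
vmax = tall.groupby("Category")["Value"].transform("max")
tall["Value for Label"] = tall["Value"].where(tall["Value"].eq(vmin) | tall["Value"].eq(vmax))
print(tall)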

In addition, I created two versions of the Value pill with different number formatting and used those in the tooltips, reused Joe’s original parameters for setting the color and sort order with revised calculations (which were also easier to build since Category is a real dimension), and finally added a couple of other worksheets to be the targets of a Filter Action that shows details of the vehicle:

[Screenshot: the finished parallel coordinates dashboard]

Click on the image above to download the workbook from Tableau Public.

The Letdown and the Pivot

The Letdown

Tableau does amazing demos. Fire up the software, connect to a data source, select a couple pills, click Show Me, boom there’s a view. Do a little drag and drop, boom, another view. Duplicate that one, boom, another view to rearrange. Within three minutes or less you can have a usable dashboard, for 200 rows of data or 200 million.

If you’ve seen those demos, the not-so-dirty little secret of Tableau is that they pretty much all start with clean, well-formatted, analytics-ready data sources. As time goes on, I’ve interacted with more and more new Tableau users who are all fired up by what they saw in the demos, and then let down when they can’t immediately do that with their own data. They’ve got to reshape the data, learn some table calcs right away, or figure out data blending to deal with differing levels of granularity, and/or put together their first ever SQL query to do a UNION or a cross product, etc. Shawn Wallwork put it this way in a forum thread back in January: “On the one hand Tableau is an incredibly easy tool to use, allowing the non-technical, non-programmers, non-analysis to explore their data and gain useful insights. Then these same people want to do something ‘simple’ like a sort, and bang they hit the Table Calculation brick wall…”

I work with nurses and doctors who are smart, highly competent people who daily make life or death decisions. Give them a page of data and they all know how to draw bar charts, line charts, and scatterplots with that data. They can compute means and medians, and with a little help get to standard deviations and more. But hand them a file of messy data and they are screwed: they end up doing a lot of copy & paste, or even printing out the file and manually retyping the data in a more usable format. The spreadsheet software they are used to (hello, Excel) lets them down…

…and so does Tableau.

A data analyst like myself can salivate over the prospect of getting access to our call center data and swooping and diving through hundreds of thousands of call records looking for patterns. However, the call center manager might just want to know if the outgoing reminder calls are leading to fewer missed appointments. In other words, the call center manager has a job to do, that leads to a question she wants to answer, and she doesn’t necessarily care about the tool, the process, or the need to tack on a few characters as a prefix to the medical record number to make it correspond to what comes out of the electronic medical record system; she just wants an answer to her question so she can do her job better. To the degree that the software doesn’t support her needs, there has to be something else to help her get her job done.

The Pivot

When Joe Mako and I first talked about writing a book together, our vision was to write “the book” on table calculations and advanced use cases for Tableau. We wanted (and still want) to teach people *how* to build the crazy-awesome visualizations that we’ve put together, and how they can come up with their own solutions to the seemingly intractable and impossible problems that get posted on the Tableau forums and elsewhere. And we’ve come to realize that there is a core set of understandings about data and how Tableau approaches data that are not explicitly revealed in the software nor well-covered in existing educational materials. Here are a few examples:

  • Spreadsheets can hold a table of data, and so can databases (we’ll leave JSON and XML data sources out of the mix for the moment). But spreadsheet tables and database tables are very different: spreadsheet tables are very often formatted for readability by humans, with merged cells and extra layers of headers that don’t make sense to computers, and a single column in a spreadsheet can have many different data types and cells with many meanings, whereas databases are more rigid in their approach. We tend to assume that new users know this, and then they get confused when their data has a bunch of Null values because the Microsoft Jet driver assumed that a column starting with numbers was numeric and wiped out the text values.
  • We, the Tableau users who train and help other users, talk about how certain data sets are “wide” vs. “tall”, and how tall data is (usually) better for Tableau, but we don’t really spell out the specific characteristics of the data and the principles involved in a way that new Tableau users who are not data analysts can understand, so that they can apply those principles themselves to arrange their data for best use in Tableau.
  • Working with Tableau, we don’t just need to know the grain of the data (what makes a unique row in the data); we also need to understand the grain of the view (the distinct combinations of values of the dimensions in the view). There can be additional grains involved when we start including features like data blending and top filters. Even “simple” aggregations get confusing when we don’t understand the data or Tableau well enough to make sense of how adding a dimension to the view can change the granularity, as sketched below.
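
To make that last point concrete, here’s a small pandas sketch with made-up data showing how the grain of the view changes as dimensions are added:

import pandas as pd

# Hypothetical data whose grain is one row per order line
orders = pd.DataFrame({
    "Region":  ["East", "East", "West", "West"],
    "Product": ["A", "B", "A", "A"],
    "Sales":   [100, 150, 200, 50],
})

# A view with only Region has a coarse grain: one mark per Region
print(orders.groupby("Region")["Sales"].sum())

# Adding Product changes the grain: one mark per Region & Product combination,
# so the "same" SUM(Sales) now slices into more, smaller marks
print(orders.groupby(["Region", "Product"])["Sales"].sum())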

[Image: Carnation, Lily, Lily, Rose by John Singer Sargent, from Wikimedia Commons]

Just as we can’t expect to be a brilliant painter without an understanding of the interplay between color and light, we can’t expect to be a master of Tableau without a data- and Tableau-specific set of understandings. Therefore, we’ve been pivoting our writing to have more focus on these foundational elements. When those foundations are in place, doing something like a self-blend to get an unfiltered data source for a Filter Action becomes conceivable and implementable.

This kind of writing takes time to research, think about, synthesize, and explain. I’ve been reading a lot of books, trawling through painfully difficult data sets, filling up pages with throw-away notes & diagrams, and always trying to keep in mind the nurses and doctors I work with, the long-time Tableau users who tell me that they still “don’t get” calculated fields in Tableau (never mind table calcs), and the folks I’m helping out on the Tableau forums. So “the book” is going slower than I’d hoped, and hopefully will be the better for it.

If you’d like a taste of this approach, I’ll be leading a hands-on workshop on pill types and granularity at this month’s Boston Tableau User Group on April 29.

Postscript #1: I’m not the only person thinking about this. Kristi Morton, Magdalena Balazinska, Dan Grossman (of the University of Washington), and Jock Mackinlay (of Tableau) have published a new paper Support the Data Enthusiast: Challenges for Next-Generation Data-Analysis Systems. I’m looking forward to what might come out of their research.

Postscript #2: This post wouldn’t have been possible without the help (whether they knew it or not) of lots of other smart people, including Dan Murray, Shawn Wallwork, Robin Kennedy, Chris Gerrard, Jon Boeckenstedt, Gregory Lewandoski, and Noah Salvaterra. As I was writing this post, I read this quote from a Tableau user at the Bergen Record via Jewel Loree & Dustin Smith on Twitter: “Data is humbling, the more I learn, the less I know.” That’s been true for me as well!