MediaLAB Amsterdam is now the Digital Society School! You are viewing an archive of MediaLAB projects. Content on this website may be outdated, and is without guarantee.

Transmedia Analytics

Team

Stefania Bercu

Embedded Researcher

stefaniabercu@gmail.com
Yannick Diezenberg

Designer

yannickdiezenberg@gmail.com
Geert Hagelaar

Designer

info@geerthagelaar.nl
Sieta van Horck

Researcher

sietavanhorck@gmail.com
Anne van Egmond

Researcher

vanegmond.anne@gmail.com

Commissioner:

Description

Visualizations as knowledge machines: ‘BounceRates’

Yes, it’s your lucky day! It’s time for ‘Visualizations as knowledge machines’! Today we’ll focus on the concept of ‘BounceRates’.

Bounce Rate is the one metric in Google Analytics that people want less of: lower bounce rates, not higher, and fewer bounces, not more. Although it is one of the most commonly used metrics to measure the performance of a website, there is also great confusion surrounding the concept.

Google Analytics defines the Bounce Rate as “the percentage of single-page sessions (i.e. sessions in which the person left your site from the entrance page without interacting with the page)”. From this point of view a bounce seems to indicate users who are not engaging with your content: they were probably not satisfied with the page they landed on, or the content didn’t drive them to click through. A bounce connotes something negative, someone who immediately left. But this is not always the case.

If a visitor views only one page, he is registered as a bounce. Google Analytics does not record the duration of such a visit and qualifies it as a zero-second visit, even though it might have lasted fifteen minutes. A possible solution is to specify this in a custom report or in the tracking code, for example by stating that users who stay longer than fifteen seconds should not be registered as a bounce.
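A common way to do this on the tracking side is the so-called ‘adjusted bounce rate’ technique: a timing event that marks the session as non-bounced. A minimal sketch with Universal Analytics’ analytics.js, written as TypeScript; the fifteen-second threshold and the event names are illustrative, not Submarine’s actual setup:

```ts
// Sketch of the 'adjusted bounce rate' technique (threshold and labels invented).
declare const ga: (...args: unknown[]) => void; // global from the standard analytics.js snippet

setTimeout(() => {
  // Any interaction event marks the session as non-bounced, so visitors
  // who stay at least fifteen seconds are no longer counted as bounces.
  ga('send', 'event', 'engagement', 'time on page', '15 seconds or more');
}, 15000);
```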

Therefore it is not always appropriate to define a bounce as something negative. People who follow a blog are likely to come back within a short period of time, and the information they seek is most likely on the page they land on. For this type of content it is not necessarily bad if people leave without diving further into your website; in fact, it could also indicate that the information is so clearly available that they didn’t need to. Likewise, a low Bounce Rate does not necessarily mean that people happily engage with your content. If low Bounce Rates go hand in hand with high numbers of Page Views, it could indicate that users are unable to find what they are looking for even after searching, and are leaving your site unfulfilled. In other words: the definition of a bounce remains unclear and its connotation needs to be carefully deliberated, depending heavily on the type of content a website concerns. Based on this we have asked ourselves what a bounce could mean in the scope of The Last Hijack.

Starting from Google Analytics’ definition of a bounce, friction arose at a few points immediately. First, there is the fact that in the case of The Last Hijack the whole documentary consists of one page (by switching between videos, perspectives, additional information etc. one stays on the same page without opening a new tab). Therefore, people who fully watch the intro and keep on watching (without interacting by clicking) and leave after seeing the whole documentary would also be counted as a bounce. Second, users who immediately leave after skipping the intro are not counted as a bounce, while this might be more alarming than users who do not interact with the page at all (who, as described above, are counted as a bounce).

Bounce Rates are no longer used as a point of measurement in the scope of The Last Hijack. One could say they have been replaced by ‘went away’, a specific user segment within the ‘intro state’, which tells us how many users ‘fully watched’ or ‘skipped’ the intro, or ‘went away’ from the page while watching it. Specified in this sense, a bounce concerns users who do not interact with the content after the intro; not because they didn’t click, but because they never made it to the ‘main content’ in the first place.
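We won’t reproduce the actual tracking code here, but a sketch of how such an ‘intro state’ segment could be instrumented with custom events looks roughly like this (the function and event names are ours):

```ts
// Sketch: report whether a user 'fully watched', 'skipped' or 'went away' during the intro.
declare const ga: (...args: unknown[]) => void;

type IntroState = 'fully watched' | 'skipped' | 'went away';
let introFinished = false;

function reportIntroState(state: IntroState): void {
  ga('send', 'event', 'intro state', state);
}

// Called when the intro video ends or the user clicks "skip".
function onIntroEnd(skipped: boolean): void {
  introFinished = true;
  reportIntroState(skipped ? 'skipped' : 'fully watched');
}

// If the tab closes while the intro is still playing, the user 'went away'.
window.addEventListener('beforeunload', () => {
  if (!introFinished) reportIntroState('went away');
});
```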

Talk to you soon!

Visualizations as knowledge machines: ‘engagement’

Hello world!

Let us introduce a future returning item on our blog: ‘visualisations as knowledge machines’. As you know, we are now focussing on Submarine’s ‘Last Hijack’, which launched almost a month ago. After cracking our heads again with correlating, we are now visualising like crazy. At first you might think this is all about making things look fancy, but there is much more going on behind the scenes. A lot of thinking goes into how a visualisation can be more than a nice image: it should display meaningful and actionable information. Every post considers a discussion about a different term. Today we are kicking off with: ‘engagement’.

In order to define a way to measure a broadly definable term such as ‘engagement’, we should consider what engagement means for this specific kind of content (The Last Hijack). What we know for sure is that The Last Hijack is an interactive documentary. But ‘interactive documentary’ is a rather broad term. Therefore we characterise The Last Hijack’s content structure with reference to Sandra Gaudenzi’s interactive documentary genre taxonomy (2009). Based on their ‘modes of interaction’ she distinguishes four types of interactive documentary: the ‘conversational mode’, the ‘experiential mode’, the ‘participatory mode’ and the ‘hyperlink mode’. In other words: different interactive structures are the basis for different interactive documentary genres. For now we will not go into a detailed description of all of them, but instead dive into the specific mode that concerns The Last Hijack, namely the ‘hyperlink mode’.

The ‘hyperlink mode’ can best be described as a closed video database. The user has an explorative role in the sense that they can navigate through the database by clicking on the options that are offered. This kind of structure encourages the user not to watch the film in a linear, fixed way, but to choose their own path through the story. The closed nature makes it the perfect form for the author to keep control, while leaving the user to decide how he wants to receive the story. This control on both sides makes this form of interactive documentary the most commonly used. The Last Hijack consists of one page, but there is the possibility to navigate through a timeline below the film and to read additional information while watching it. In essence, however, interactivity is quite limited: the user can navigate through the structure, but cannot create it. The user constructs an individual ‘story’ that consists of the segments selected during the navigation process. The larger the database, the greater the chance that the experience of the story is unique; or, in other words, that a unique navigational path has been chosen through the story (Lister et al. 22).

Therefore we defined the indicators for an ‘engaged’ user as follows (see the sketch after this list):

  1. A great number of switches between different videos indicates that a user is actively exploring the content, which corresponds to the aim of content in the ‘hyperlink mode’. Therefore we define the first level of engagement as a large number of different videos watched.

  2. The second level considers a stronger form of engagement, namely that the user has watched a high number of different videos AND watched a relatively high percentage of the content of those videos (the difference between leaving a video halfway and watching it until the end).

  3. The most intensive level of engagement for the kind of content The Last Hijack concerns is the combination of a user watching a high number of different videos, having watched a relatively high percentage of the content of those videos AND having switched a lot between perspectives.
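A minimal sketch of these three levels in code; the thresholds are placeholders that would have to be tuned against real data:

```ts
// Per-session statistics that the custom events above would let us derive.
interface SessionStats {
  videosWatched: number;        // number of different videos opened
  avgPercentWatched: number;    // average share of each video's length watched (0-100)
  perspectiveSwitches: number;  // switches between perspectives
}

function engagementLevel(s: SessionStats): 0 | 1 | 2 | 3 {
  const manyVideos = s.videosWatched >= 5;         // placeholder threshold
  const watchedDeeply = s.avgPercentWatched >= 70; // placeholder threshold
  const manySwitches = s.perspectiveSwitches >= 3; // placeholder threshold

  if (manyVideos && watchedDeeply && manySwitches) return 3;
  if (manyVideos && watchedDeeply) return 2;
  if (manyVideos) return 1;
  return 0; // not engaged by these indicators
}
```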

Next time we will discuss ‘popularity of a video’. Talk to you soon!

What’s next?

So last week we met with Submarine again and presented our final visualizations and dashboard for Unspeak. Afterwards, we discussed some questions concerning our work for Unspeak, which were mostly about highlighting problem areas. As we are not the producers of their interactive documentaries, it is very hard for us to determine what exactly would indicate a problem. Therefore, we agreed that the visualization will indicate possible problem zones, instead of highlighting individual data points.

Furthermore, we discussed that in order for the visualization to be easy to understand, it is always wise to stay close to the actual shapes of the visualized objects. For instance, we used bubbles to visualize screen resolutions, while screens are obviously rectangular. Using the same shape as the object concerned makes a visualization more intelligible for the reader. Overall the meeting went very well, and we were happy to find that the guys from Submarine understood the work we did on Unspeak and also pointed out a few points for improvement. We also concluded that actually programming all of the back end (retrieving all data from Google Analytics) and the visualizations for Unspeak would be a very time-consuming job. Given that we only have two months left, we agreed on documenting all of the work we did for Unspeak in one comprehensive report. If Submarine would like to take up this project somewhere down the road, this report will allow them to.

Another reason for not programming Unspeak is the fact that their new interactive documentary Last Hijack will launch this week. This interactive documentary differs quite a bit from Unspeak and will therefore allow us to capture more data. Not only will we use the correlations that we already used for Unspeak (see below), but for this project we will also focus more on capturing custom events. In order to capture this data, Emiel den Tex has already done a lot of programming: using the program Greasemonkey it is possible to program custom dimensions, events and metrics, which can be queried in the Google Query Explorer once captured.
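To give an impression (Emiel’s actual code is more involved), a Greasemonkey userscript could inject calls like these into the page, written here as TypeScript; the dimension slot and the event names are invented for illustration:

```ts
// Sketch: a custom dimension plus a custom event, as a userscript might send them.
declare const ga: (...args: unknown[]) => void;

ga('set', 'dimension1', 'hyperlink mode');             // hypothetical custom dimension
ga('send', 'event', 'video', 'switch', 'perspective'); // hypothetical custom event
```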

We have now started formulating meaningful questions, which will help us form correlations which we will then visualize. The data for Last Hijack will be categorized in the same way as we did for Unspeak, namely: Behavior, Acquisition and Technical. Also, we will start a referral study which will focus on examining the internet buzz around Last Hijack: what is the discourse around the project and how is it being spoken about? The research will be divided into two parts: the conversation on Twitter, and, using a tool called Google Scraper, an investigation of how different websites write about the project. We are very excited about this, but in order for there to be internet buzz around the project, it has to be launched first. We’ll keep you posted on that! So, for now, back to correlating.

It’s been a long time …

Hello world! It’s been a long time. Definitely about time for an update, because we have not been sitting on our hands.

In our last meeting with Submarine it became clear to us that we should not focus on giving an overview of all available user data, but on detecting potential problems. Instead of redesigning the Google Analytics visitor flow, we have now shifted our focus to concrete questions the visualisations should answer. Therefore we decided on three categories and linked these to three main questions.

The main question that goes with the BEHAVIOR category is: how are users exploring the content, and what are the top pages for driving or stopping exploration of the content? This category is all about page performance. Think of the entrances and exits per page, pageviews, the average time spent on a page and the number of pageviews per visit.

Within ACQUISITION the focus lies on the question of what the process of user acquisition looks like and which referral sources the most/fewest users come from. Information about referral performance is central here. In other words, here we look at the influence that certain referral sources have on the way people engage with the website.

The main question within TECHNICAL is whether we can identify technical problems with watching the documentary. The focus here lies on people that drop off, and on detecting the causes of this: for example, by checking whether high numbers of people leaving are linked to certain browsers or screen resolutions, which could indicate technical problems that cause drop-offs.

By defining sub-questions within every category we started correlating available data in order to extract answers. Once we had found meaningful queries, the next step was visualising them in a readable manner. We realised that one of the big problems with Google Analytics is that you need to spend a lot of time with it to understand what you need to look for. To save you time and work, our main demand is highlighting relevant information and making the biggest problems stand out. Therefore we defined problem indicators and best-case scenarios for each visualisation, which we will elaborate on later in more detail. For now we will discuss three visualisations.

VISUALISATION 1 [BEHAVIOR]: Page performance: pageviews and average time on page

Explanatory note
This graph shows how much time users spend on a page (on average) or, by using the toggle, how often the page has been viewed. Pages highlighted in red indicate ‘under-engaged’ content and might be worth looking at.

*Note
The page listed on top is the one generating the highest number of visits; the list is in descending order.

Problem indicators
– Low average time on page
– Low number of pageviews
– Extra strong indicator: pages that are both listed near the top and highlighted in red (since the pages are sorted by visits, the pages with the most visits are listed on top).
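The flagging rule can be made precise with a small sketch; the cut-off values are invented for illustration:

```ts
interface PageStats { title: string; visits: number; pageviews: number; avgTimeOnPage: number }

// Flag 'under-engaged' pages: low average time on page or few pageviews.
// The cut-offs below are placeholders, not our real thresholds.
function isUnderEngaged(p: PageStats): boolean {
  return p.avgTimeOnPage < 15 || p.pageviews < 100;
}

// Extra strong indicator: a flagged page among the most-visited ones.
function strongIndicators(pages: PageStats[], topN = 5): PageStats[] {
  return [...pages]
    .sort((a, b) => b.visits - a.visits) // most visits on top, like the visualisation
    .slice(0, topN)
    .filter(isUnderEngaged);
}
```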


VISUALISATION 2 [ACQUISITION]:
Referral performance: volume & bounce rates

Explanatory note
Per referral source: how many visitors leave the website without any interaction (bounce rate)? This helps extract which sources drive traffic to your website, and also how many of those visitors leave without interacting with the content at all.

Problem indicators
– High number of visits and a high bounce rate (visitors that leave the website without any interaction)
– Low number of visits and a high bounce rate (visitors that leave the website without any interaction)


VISUALISATION 3 [TECHNICAL]: Country performance: volume & bounce rate

Explanatory note
This graph shows which countries are driving the most traffic, and how much of that traffic bounces without interacting with the site. If the problem indicators show an issue, you can select a country and see a list of providers in that country to determine whether the problem lies with one specific provider.

*Note
A high bounce rate might indicate a language barrier.

Problem indicators
– Countries with a high number of visits and high bounce rates (visitors that leave the website without any interaction)
– Extremely high bounce rates (visitors that leave the website without any interaction)

But there is more. We also started thinking about how the tool itself should look, in the sense of a dashboard. Big, big thanks to Tamara (one of the programmers at the MediaLAB) for her endless patience in teaching us programming basics. We proudly present the result:
Note that this is a screenshot; the real thing is interactive!

[Screenshot of the dashboard]

We presented all of this to Submarine on the 17th of April, and it turned out very well. They liked what they saw; we are definitely headed in the right direction. Talk to you soon about what’s next …

 

Expert Interview


Short bio
Gabriel Colombo holds a Master’s degree in Communication Design from Politecnico di Milano. He has a lot of experience in designing visual tools to facilitate academic and market research projects and likes to focus on data visualization, infographics and visual storytelling.

Gabriel has worked at ‘The Visual Agency’, an Italian agency with a focus on infographics. He also often collaborates as a visual designer with the Digital Methods Initiative. He is ‘a big fan of excel files, old maps and typographic ligatures’ (MediaLAB 2014).

The right data is captured, what’s next?

“You start trying to find out which types of visualization work and which don’t. You can simply do this in Adobe Illustrator by using the basic charts from the Graph Tool. If you see right away that the shape doesn’t show the differentiation within the data, you’ll have to come up with a solution or another type of graph”. Gabriel notes that it helps to write down the problems you stumble upon, so you can address them at another moment. According to Gabriel it is also important to skip the aesthetics during this first phase: “You only want to know what works and what doesn’t. Coloring things and making labels look pretty is a real waste of time when you are not going to use the graph later on”. For this first phase of visualization, Gabriel refers to a couple of tools you can use:
– Adobe Illustrator: Illustrator contains a graph tool with which you can easily import data from datasets and render it as one of multiple available types of graphs, and you can easily manipulate the result. There is a disadvantage, though: once you start manipulating the graph, for example by scaling it, you are no longer able to change the underlying data.
– RAW: RAW is a free online tool which lets you test your dataset in a visualization in just a minute. When you import the data (just by copy-pasting from Excel) you can easily connect variables to axes, labels, etc. by dragging and dropping. When you are happy with the result, you can export it and tweak it in Illustrator.
– Plotly: Plotly offers the same ease of use as RAW but also gives you the option to make your visualization interactive. You can easily embed it, or even tweak the output through APIs for Python, Arduino and other languages.

What do you do if you find out that there’s too much data to show?
“When a graph becomes too chaotic because of the amount of data, you can try to find a way to aggregate certain data. That way the information that has to be shown remains clear, and patterns or interesting peaks are still visible”. This has a negative side-effect, however: aggregating also hides possible smaller problems, so it requires really critical reflection.
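As a toy example of such aggregation (ours, not Gabriel’s): binning daily visit counts into weekly totals keeps the overall pattern visible while hiding day-level noise.

```ts
// Sketch: aggregate per-day visit counts into weekly totals to reduce visual clutter.
function aggregateWeekly(dailyVisits: number[]): number[] {
  const weekly: number[] = [];
  for (let i = 0; i < dailyVisits.length; i += 7) {
    weekly.push(dailyVisits.slice(i, i + 7).reduce((sum, v) => sum + v, 0));
  }
  return weekly;
}
```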

Do you have some tips about using labels, colours, etc.?
“You are lucky you have to make interactive visualizations because you can hide, show and sort information that is less essential in the default state of the visualization”.

A way to design interactive visualizations in a static tool is to make use of multiple artboards and mock up what the different states would look like. For example: when you click on a certain element, the second artboard shows detailed data about it.

What do you have to say about the right way of using colours?
“It’s important to look at the data ranges of your data set. If you have a variable with a minimum and a maximum you can pick two colours; the hues between those two colours will then show the values of the data in between. Watch out with gradients: if you don’t need them to show values, leave them out of the visualization”.

Gabriel also gives a tip: when using many categories you can use ‘random’ colours to show the differentiation. For example, use different colours on a map to separate countries from each other.

Do you have any experience with combining different kind of charts?
“You have to be careful with it; combining different charts can create a chaotic visualization. In your case it is better to use multiple graphs and then combine them all in one visualization. The use of interactivity gives you a lot of possibilities to make a good overview of the data you want to show. Combining charts will also make it difficult for the programmers.”

In our particular situation, we work with programmers who do not have a lot of experience with D3. They will likely have to depend heavily on existing visual models and code, and tweak those.

How do you test your visualization?
“Print your work and write down the problems you come across, and let the people around you give feedback. Making a paper prototype can be a way to test it with your users. Take the time to come up with the right questions you want to ask. By creating a kind of user scenario that represents a real situation, you know whether your work provides enough information. You have to ask yourself ‘What is the problem of the user?’ and ‘Will the visualization give an answer to this question?’”.

Let’s correlate!

So, as you know, meeting Submarine was an important turning point for our project. After letting it all sink in, we kept up our courage; it was time to identify a new, appropriate course of action. This time the starting point was a set of concrete wishes from Submarine, in terms of which questions they want to answer by consulting the future data tool. Based on this we set categories – drop-offs, sources, heat maps, demographics, Last Hijack – which we connected to tangible questions. For example, concentrating on drop-offs should not only tell us that people drop off, but also why they drop off. Such a question can then be approached from different sides. By thinking about what could be meaningful to know, we divided the categories into sub-questions and started correlating data from the Google API in order to find ways to extract answers: first by writing possible combinations down, and afterwards by testing them in Google’s Query Explorer, an interactive tool to execute Core Reporting API queries without actually coding them.

This tool lets you play with the Core Reporting API by building queries to get data from your Google Analytics account. You can use these queries in any of the client libraries to build your own tools. By combining data we tested whether it was possible at all to get the data we would like to have. To give you an idea of how this works, a shortened explanation follows. This is what the Query Explorer looks like:

[Screenshot: the Query Explorer interface]

As you can see there are different parameters. The ‘dimensions’ parameter breaks down metrics by common criteria; you could consider it the characteristic of something you measure. You could, for example, insert ‘ga:browser’ or ‘ga:city’ in dimensions in order to break down the page views of your site, which is more interesting than just seeing the totals. The ‘metrics’ parameter can then be defined as the aggregated statistics for user activity on your site, such as clicks or page views. If a query has no dimensions parameter, the returned metrics provide aggregate values for the requested date range, such as overall page views or total bounces. However, when dimensions are requested, values are segmented by dimension value. Any request must supply at least one metric (with a maximum of 10 metrics); a request cannot consist only of dimensions. The difficulty here is that a metric can be used in combination with other dimensions or metrics, but only where valid combinations apply for that metric. With the ‘segment’ parameter you can specify a subset of visits; this subset is matched before dimensions and metrics are calculated. With ‘filters’ you specify a subset of all data matched in Analytics, for example ga:country==Canada. And last but not least there is the ‘sort’ parameter, which determines the order and direction in which you want to retrieve the results, based on multiple dimensions and metrics.
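To make the parameters concrete, here is a sketch of a Core Reporting API (v3) request assembled from them; the profile id is a placeholder and the OAuth access token is omitted:

```ts
// Sketch: page views broken down by browser, most-viewed first, for visits from Canada.
const params = new URLSearchParams({
  ids: 'ga:XXXXXXXX',            // placeholder profile (view) id
  'start-date': '2014-03-01',
  'end-date': '2014-03-25',
  dimensions: 'ga:browser',
  metrics: 'ga:pageviews',
  filters: 'ga:country==Canada', // subset of all matched data
  sort: '-ga:pageviews',         // descending by page views
});

const url = `https://www.googleapis.com/analytics/v3/data/ga?${params}`;
```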

A specific need for Submarine is to focus on the first 30 seconds of the interaction with the documentary. They would like to know what exactly is happening within that specific timeframe. We tried to do this by combining different data and filters, but the Query Explorer gave us a hard time. We tried filtering on ‘timeOnPage <= 30’, and putting it into ‘segments’, while constantly recombining different dimensions and metrics. Luckily we met Emiel on Friday. He is the programmer who implements the code that distils the data out of the Google Analytics API. We discussed the difficulties we faced and tried to solve them together. It turned out that ‘timeOnPage’ wasn’t the right metric to use. Instead we should use ‘visitLength’ with a quite specific filter we didn’t know of. To give you an idea: one of our sub-questions was ‘How many visits come from a certain device, and how many drop-offs are there in the first 30 seconds?’. The right query looks like this:
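In API terms, a query of roughly this shape fits the description (our reconstruction, so the exact dimensions, dates and segment syntax may differ from the screenshot below):

```ts
// Sketch: visits per device category, limited to visits that lasted at most 30 seconds.
const dropOffParams = new URLSearchParams({
  ids: 'ga:XXXXXXXX',                     // placeholder profile id
  'start-date': '2014-03-01',
  'end-date': '2014-03-25',
  dimensions: 'ga:deviceCategory',
  metrics: 'ga:visits',
  segment: 'dynamic::ga:visitLength<=30', // a dynamic segment on visit length (our guess at the specific filter)
  sort: '-ga:visits',
});
```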

[Screenshot: the query in the Query Explorer]

So, we spent quite a lot of time playing and testing with the Query Explorer. We discovered more or less what is possible and what is not, so it was time to bring all this together with the correlations made by Stefania and to decide which questions and correlations are going to be priorities.

After the meeting with Emiel, we all sat together and agreed that it was time for a new plan. Last Hijack will be released somewhere around the first week of April, and this will give us the opportunity to analyze the data that Last Hijack produces. This means it is now important to decide which data we actually want to capture and analyze for Unspeak; note that these correlations are generally applicable to interactive documentaries (and therefore also to Last Hijack). Since Last Hijack will be released somewhere around the first week of April, this gives us only approximately three weeks to finish the correlations for Unspeak, which is why we decided to focus on six custom reports which we will translate into visualizations. Below you can find a short overview of the correlations that will be made and the questions we will attempt to answer on the basis of each correlation.

1A. In which countries are people having the most technical problems with watching the documentary?

Country / Visits / BounceRate / PageViewsPerVisit / AvgTimeOnSite

The accompanying visualization for this correlation should contain the option of zooming in on a country that, for instance, has a high bounce rate. In order for the viewer to see whether this is maybe caused by a certain provider, the viewer can select any country and see which providers are causing problems. The appropriate question and accompanying correlation for this is:

1B. Are internet providers in a certain country causing problems?

Country / NetworkLocation /VisitBounceRate / AvgTimeOnSite / AvgPagePerVisit / Visits

2. Is there a problem with a mobile browser, screen resolution or loading time?

MobileDeviceInfo / Browser / ScreenResolution / Visits / VisitBounceRate / AvgTimeOnSite / PageViewsPerVisits

3. Is there a problem with a certain operating system, screen resolution or browser?

OperatingSystem / Browser / ScreenResolution / Visits / BounceRate / AvgTimeOnSite

4. What source(s) give us the most/least engaged* users (* in terms of page views and time spent)?

FullReferrer / Visits / PercentNewVisits / VisitBounceRate / PageViewsPerVisit / AvgTimeOnSite

5. How well are separate pages doing?

  • How many people leave from a particular page?
  • How much time do people spend on the page?
  • What’s the chance that they enter on a particular page?

PageTitle / Entrancerate / ExitRate / PageViewsPerVisit / AvgTimeOnPage

6.  Segmenting age groups according to age brackets:

  • Where do visitors come from (what’s their referral path)?
  • What are their interests?
  • How much time do they spend on the website (per gender)?
  • What are the best references for certain age groups with certain interests?

VisitorAgeBracket / Gender / Interests / RefPath / BounceRate / PageViewsPerVisit / AvgTimeOnSite

Unfortunately this last custom report is not possible yet, since Google Analytics does not give us demographic data below a certain threshold number of users (and we do not know what this threshold is).

The rest of the custom reports have been tested in the Google Query Explorer, and the data these tests produced has been exported into Excel files. Although we are still busy figuring out what these custom reports exactly tell us, the next step will be turning this data into simple visualizations.
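As an illustration of how such a custom report maps onto the Core Reporting API, correlation 4 (referral engagement) could be queried roughly like this (a sketch with v3 parameter names, not our exact report):

```ts
// Sketch: engagement per referral source, matching correlation 4 above.
const referralParams = new URLSearchParams({
  ids: 'ga:XXXXXXXX', // placeholder profile id
  'start-date': '2014-03-01',
  'end-date': '2014-03-31',
  dimensions: 'ga:fullReferrer',
  metrics: 'ga:visits,ga:percentNewVisits,ga:visitBounceRate,ga:pageviewsPerVisit,ga:avgTimeOnSite',
  sort: '-ga:visits', // sources driving the most visits on top
});
```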

Infographic Congress 2014 – Zeist

Homo Infographicus

Tracking your own life, personal data nightmares, improving lives with data and even duck-face wars in ‘selfiecity’: all subjects that came along during the seventh edition of the Infographic Congress, held on 14 March 2014 in Zeist. A day that was all about using big data in multiple ways, for different purposes, and accordingly visualized in manifold forms. Guided by Petra Grijzen, a fully loaded Figi Theater was ready to get inspired.

The day started with a great kick-off by Nicholas Felton, aka ‘Mister registering it all’. He showed that not only big companies are in charge of your data: you can also gather it yourself and do pretty cool stuff with it. By gathering the data he produces in his daily routines, he designed his now-famous ‘Feltron Reports’: a translation of numerous measurements specific to his own life into a wide variety of graphs, maps and statistics that reflect his year’s activities. This is what he calls ‘personal annual reporting’, the distillation of data out of your own life. He discussed different approaches to doing this. One of them is the archaeological approach, in other words using sources of data after an activity, which formed the foundation of his first personal report. Books he read, restaurants he had been to, trips he made, music he listened to, the amount of coffee he drank, the photos he took (1% were of his cat): he summarized and visualized it all. The second way of gathering information about your life he called ‘hoarding’: documenting as much information as possible. This turned out to be a lot of work to do manually, so he developed an app to help him document his life, called the Reporter App, with which you report activities only at certain times. While he assumed that (maybe) his friends and family would be interested in this, it gained much more attention than he expected; it went viral.

In contrast to Felton’s argument, Jack Medway (a.k.a. John Grimwade) talked about ‘The personal data nightmare’. He argued that it is simply useless to document everything in your life, or in his words: “just because you can does not mean that there is any meaning in doing this”. Medway simply does not care about this overload of meaningless data and suggests that in this information age you should only use simple and useful visualizations. Medway underpins his argument with two chapters of visualizations: ‘No, thanks’ and ‘Yes, please’. Within the ‘No, thanks’ chapter he argues that although visualizations can look very pretty, far too many infographic designers only focus on the aesthetic side of ‘dataviz’ rather than actually representing data in a meaningful way. In the ‘Yes, please’ chapter Medway argues that for information to be meaningful and readable, it should be presented in a clear and simple way, in easy-to-read visualizations.

In her presentation ‘Every picture tells a story’, Yael de Haan focussed on the process of making infographics itself. During her research she identified different sorts of collaborations between infographic designers and their clients. De Haan identified four kinds of collaboration, which all revolve around one central problem: ‘Which story do I want to narrate to the viewer?’. Every collaboration between designer and client works in the same order, namely: identification, coordination and reflection. De Haan argues that within every visualization project the designer and the client should always ask themselves what the image actually adds to the text.

A whole other way of looking at data was provided by Stefanie Posavec. Posavec is into data illustration: using data as inspiration for art. She summarizes her conception of data illustration as ‘communicating a message that goes beyond the message found within this data’. One example of her work is the so-called ‘Facebook dance’, which is based upon the love relationships of couples on Facebook. She attempted to translate a ‘virtual dance’ consisting of shared posts, likes and comments into a physical dance between the lovers.

While the morning formed the more theoretical part, after lunch (a great lunch, including sausage rolls!) a few specific cases were shared with us. First, Thijs Niks told us all about moving ‘from paper to pixels’, with the current trend of traditional newspapers offering a digital version as the central theme. Based on his experience in a project he did for NRC, he emphasized the importance of good interaction design when translating concepts into their digital versions, rather than just copying content to an online environment; after all, you are not only competing with other newspapers, but also with platforms that are not per se news-related, like social media. You should consider your goal, divide your attention, break through conventional reading methods and realize that digital environments have different rules.

Renato Valdés presented his ‘Daily 30 seconds’, which arose from the idea of ‘making moving more fun’. The fast-growing startup’s mission is to use technology to make people healthier and happier, by giving them advice based on the data they produce. The founders’ shared passion for health comes from two completely different backgrounds, but has been the driving force behind the company. Renato once weighed 145 kg, but has since lost so much weight that he now weighs only 80 kg (YOU GO RENATO!). Using passive tracking, the app calculates the average number of minutes the user moves per day and motivates him or her with the ‘Daily 30’ to reach the minimum number of active minutes per day that a person needs to stay healthy.

In the third case ‘LocalFocus’ was discussed: a service that focusses on making big datasets accessible and understandable for local journalists, so they can be used as a source for reporting. Founder Jelle Kamsma explained how national datasets are disaggregated into local ones that provide not only numbers but also a broader context. With an intuitive drag-and-drop interface, its main goal is to make data journalism and its new possibilities approachable for local journalists.

In the last part of the conference, Moritz Stefaner presented his work ‘Selfiecity’. In this project Stefaner does not focus on a given dataset, but rather uses visual content as data. He gathered a large number of public selfies found on the internet and divided these over five different cities. He then tried to link these selfies to each other to find similarities between them, using them as a sociological research tool to identify changes in cultural values. A very interesting ‘digital humanities’ approach; you should really take a look: http://selfiecity.net/


All there is left to say is that we were very inspired by all these great presentations and now know a little bit more about what is going on in the world of infographics. Special thanks to Frederik Ruys for inviting us to this great conference!
Talk to you soon…

Meeting Submarine

Last week, the 13th of March to be exact, we met with Submarine for the first time. Armed with our redesigned visitor-flow iterations and a healthy dose of nervousness, we entered the office of Submarine. The table we sat at was filled with all of the people involved in the project, including Gijs Kattenberg (front-end developer), Christiaan de Rooij (interface designer), Yaniv Wolf (marketing and publicity), Marlijn Koers (production assistant), Stefania Bercu (our embedded researcher), Bernhard Rieder (project leader), Loes Borgers (our MediaLAB coach) and of course the four of us. Although it was very good to see the actual faces behind the names we had heard during the past weeks, the meeting went a little differently than we initially expected.

First, we presented the redesign iterations of the visitor flow that Geert and Yannick had been busy with over the last few weeks. Using the redesign as an example, we briefly explained why we made certain design choices and which features we added to the visualization that Google Analytics does not include. Unfortunately, it became clear that they had something slightly different in mind. The guys from Submarine explained to us that it is not so much about redesigning the visualizations of Google Analytics, but rather about designing an additional tool that catches correlations between data, and data that is really specific to interactive documentaries: less focused on measuring numbers and more focused on measuring user engagement through, for example, mouse movements, hovering and the interactions while watching the documentary. This means concentrating on specific elements instead of on the whole visitor flow; for example, concentrating on ‘drop-offs’ and finding out WHY people drop off by correlating all different sorts of data. This also means correlating, for example, the way people interact with stories to certain age groups.

In order for them to retrieve actionable information, we should focus on correlating different pieces of data and then asking targeted questions whose answers would provide meaningful information that Submarine can actually act on. This all might sound pretty abstract, so we will give you an easy example of what this new approach could look like. For instance, a correlation could be made between the ‘referral source’ and the ‘time spent on site’. A targeted question that would produce meaningful information for Submarine could then be: ‘Where do visitors come from, and what influence does this have on the interaction with their creation?’. Say the NRC refers to Last Hijack on their website; people that come via the NRC might be very interested in interactive documentaries and therefore spend a long time on the website. This would mean that it would be wise for Submarine to invest in their relationship with the NRC, because evidently this referral produces a lot of engaged visitors. Also, Submarine indicated that they would like us to focus more on individual problems, like the drop-offs of certain pages and/or episodes, instead of providing an overview of the entire visitor flow.

This new approach should therefore focus less on the display of quantitative information and more on providing meaningful answers to correlated data, and then embedding these in clear visualizations. Although we were surprised that we have to adjust our approach to the project, we do feel this could be a fresh start for us; the meeting motivated us to challenge ourselves to deliver the best possible end product for Submarine. Talk to you soon!

 

 

Go with the flow

So, some serious work has been done this week! Time to tell you all about it.

While in the previous weeks discovering the broader context of the project was particularly high on the agenda, this week some actual designing is in progress. The focus is re-visualizing Google Analytics’ ‘visitor flow’. This includes a lot of different features, so we decided to concentrate on specific functions within this flow, namely the visualization of user paths per segment and the visualization of the number of visits. While our superfine researchers dived into the literature to find solutions for the first problems we identified, our top-notch designers started off with a lot of experimenting with different forms.

When talking about visualizing numbers of visits in the most effective way, think of deciding how to display numbers meaningfully, for example using ratios, percentages or averages instead of raw counts. Loose numbers tell you the amount of visits, but when you relate them to, for example, a certain average, these numbers gain significance: you can also see whether they are above or below the average. When you give these numbers a proper shape by visually encoding them, the graphic saves the reader time and energy by summarizing the meaning of a certain number.
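A tiny sketch of that idea: instead of plotting raw visit counts, encode each number relative to the average (the counts are invented):

```ts
const visitsPerPage = [120, 80, 45, 310, 95]; // invented visit counts
const avg = visitsPerPage.reduce((a, b) => a + b, 0) / visitsPerPage.length;

// Express each count as a deviation from the average, which is what
// the visual encoding (e.g. a bar above/below a baseline) would show.
const relative = visitsPerPage.map(v => ({
  visits: v,
  pctOfAverage: Math.round((v / avg) * 100), // 100 = exactly average
  aboveAverage: v > avg,
}));
```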

[Two images: the Google Analytics visitor flow (left) and our redesign sketches (right)]

The visitor flow is based on a number of properties that should also be included in its redesign. First, it is divided into ‘interactions’, in other words into concrete actions which lead the visitor to a certain part of the site, for example a certain episode or interactive element. As you can see in the image above on the left, this data can be displayed according to certain categories, for example country, city, browser or language, and split into segments; in this case into ‘United States’, ‘Netherlands’, ‘United Kingdom’, ‘Germany’ and ‘Canada’.

When displaying data specified by ‘browser’, for example, the segments would be ‘Chrome’, ‘Safari’, etc. Though the ‘startpage’ column makes clear what the number of visits from a certain country is, all countries are mixed into one total in the ‘first interaction’. After the first interaction it is no longer clear how many visitors come from which country (segment). In our opinion, when selecting a segment it should be possible to track this segment throughout the whole visitor flow. Also, the visualization should give the viewer an overview of the visitor flow as a whole. Yet it is not possible to view the paths per segment individually within the general overview of the visitor flow. This is an important aim for the redesign, but it turned out to be very complicated. Things we kept in mind while designing (based on research) were notions such as ‘visual hierarchy’, because too many lines obscure the message. ‘In information graphics, what you show is as important as what you hide’ (Cairo, n. pag.). Therefore it could be useful to integrate the possibility of highlighting certain segments, keeping the secondary ones in the background.
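A minimal sketch of such highlighting in D3, assuming each segment path carries a data-segment attribute (the selectors and class names are ours, not from an actual implementation):

```ts
import * as d3 from 'd3';

// Dim all segment paths, then bring the selected segment to the foreground.
function highlightSegment(segment: string): void {
  d3.selectAll('path.segment')
    .style('opacity', 0.15); // push secondary segments to the background
  d3.selectAll(`path.segment[data-segment="${segment}"]`)
    .style('opacity', 1);    // highlight the selected segment
}

highlightSegment('Netherlands');
```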

But we are getting there, and we came up with a few potential solutions, as you can see in the image above on the right. The next step is testing them. Unfortunately we still don’t have the actual dataset; because this will take some time, we have created a ‘fake’ dataset for testing.

We will keep you posted!

 

 

 

 

Wake up, Kick ass, Repeat.

Read all about it

So, now we know everything about Google Analytics. Okay, not everything, but at least some of its basics. Next week we will have a crash course, facilitated by a Communication and Multimedia Design student, to learn even more of the basics. BIG HOORAY! For him at least, since it is his week off. We are looking forward to this because, just like the old saying, Rome was not built in one day: understanding Google Analytics takes more than just clicking around.

But nevertheless, a new sprint planning has been made! This week we read all about ‘interactive storytelling’, ‘user engagement’ and ‘narrative visualizations’, and submerged ourselves in the world of the interactive documentary to get an idea of what the different forms of interactive documentaries can look like. By brainstorming together this morning we were able to put all of our findings into a mind map. This helped us find some common themes that cut through all the literature and thereby gave us a deeper insight into the fundaments of good user experience.


Next week, the second week of our sprint, we will focus on exploring the area of ‘data visualization’ and its characteristics: not only by reading about it, but also by diving into D3 (a JavaScript library for manipulating documents based on data) with Tamara, attending lectures and trying to talk to the big guys in the field. By the end of the week this will result in a manifesto with some inspiration regarding data visualization, which will form the fundament for the design of our first prototype dashboard.

Talk to you soon!